Find me on Twitter, GitHub, and LinkedIn, or contact me at [email protected].
I recently finished my PhD in Machine Learning (FAI CDT) at University College London, advised by Benjamin Guedj. Previously I completed the UCL Machine Learning MSc and the Physics Tripos at Cambridge. I am broadly interested in understanding how and when machine learning methods work, with the motivation of making them safer and more predictable.
I am actively seeking a next role, ideally one with a technical focus on responsible and safe AI. If you would like to collaborate or discuss further, don’t hesitate to get in touch!
MMD-FUSE: Learning and Combining Kernels for Two-Sample Testing Without Data Splitting.
Felix Biggs†, Antonin Schrab†, Arthur Gretton.
Neural Information Processing Systems (NeurIPS), 2023. Spotlight* [arXiv:2306.08777]
Tighter PAC-Bayes Generalisation Bounds by Leveraging Example Difficulty.
Felix Biggs, Benjamin Guedj.
Artificial Intelligence and Statistics (AISTATS), 2023. [arXiv:2210.11289]
On Margins and Generalisation for Voting Classifiers.
Felix Biggs, Valentina Zantedeschi, Benjamin Guedj.
Neural Information Processing Systems (NeurIPS), 2022. [arXiv:2206.04607]
Non-Vacuous Generalisation Bounds for Shallow Networks.
Felix Biggs, Benjamin Guedj.
International Conference on Machine Learning (ICML), 2022. [arXiv:2202.01627]
On Margins and Derandomisation in PAC-Bayes.
Felix Biggs, Benjamin Guedj.
Artificial Intelligence and Statistics (AISTATS), 2022. [arXiv:2107.03955]
A Note on the Efficient Evaluation of PAC-Bayes Bounds.
Felix Biggs.
Preprint. [arXiv:2209.05188]
Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks.
Felix Biggs, Benjamin Guedj.
Entropy, 2021, 23, 1280. [doi:10.3390/e23101280]
† = equal contribution. * = roughly 3% of NeurIPS submissions receive a spotlight.