
Progress in Self-Certified Neural Networks

Perez-Ortiz, M; Rivasplata, O; Parrado-Hernandez, E; Guedj, B; Shawe-Taylor, J; (2021) Progress in Self-Certified Neural Networks. In: Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021). NeurIPS.

Text: 2111.07737v1.pdf - Accepted Version (599kB)

Abstract

A learning method is self-certified if it uses all available data to simultaneously learn a predictor and certify its quality with a tight statistical certificate that is valid on unseen data. Recent work has shown that neural network models trained by optimising PAC-Bayes bounds lead not only to accurate predictors, but also to tight risk certificates, bearing promise towards achieving self-certified learning. In this context, learning and certification strategies based on PAC-Bayes bounds are especially attractive due to their ability to leverage all data to learn a posterior and simultaneously certify its risk with a tight numerical certificate. In this paper, we assess the progress towards self-certification in probabilistic neural networks learnt by PAC-Bayes inspired objectives. We empirically compare (on 4 classification datasets) classical test set bounds for deterministic predictors and a PAC-Bayes bound for randomised self-certified predictors. We first show that both of these generalisation bounds are not too far from out-of-sample test set errors. We then show that in data starvation regimes, holding out data for the test set bounds adversely affects generalisation performance, while self-certified strategies based on PAC-Bayes bounds do not suffer from this drawback, suggesting that they may be a suitable choice for the small data regime. We also find that probabilistic neural networks learnt by PAC-Bayes inspired objectives lead to certificates that can be surprisingly competitive with commonly used test set bounds.
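
To illustrate the two kinds of certificates compared in the abstract, the sketch below numerically evaluates a classical test-set bound (a Langford-style Chernoff/kl test-set bound for a deterministic predictor) against a PAC-Bayes-kl risk certificate (Maurer / Langford-Seeger form) for a randomised predictor. This is a minimal sketch under those assumed bound forms, not the authors' implementation; the sample sizes, empirical errors and KL term in the example are hypothetical.

```python
"""Illustrative comparison of a test-set bound and a PAC-Bayes-kl certificate.

Assumed bound forms (standard in the PAC-Bayes literature, not taken from the
paper's code): Langford's test-set bound kl(test_err || risk) <= ln(1/delta)/m
and the Maurer / Langford-Seeger PAC-Bayes-kl bound
kl(emp_err || risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n.
"""
import math


def kl_bernoulli(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))


def kl_inverse(q_hat, bound, tol=1e-9):
    """Largest p in [q_hat, 1] with kl(q_hat || p) <= bound, via bisection."""
    lo, hi = q_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(q_hat, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo


def test_set_bound(test_error, n_test, delta):
    """Risk certificate from a held-out test set of size n_test."""
    return kl_inverse(test_error, math.log(1 / delta) / n_test)


def pac_bayes_kl_bound(emp_error, kl_posterior_prior, n, delta):
    """Risk certificate from the PAC-Bayes-kl bound, using all n training points."""
    rhs = (kl_posterior_prior + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_error, rhs)


if __name__ == "__main__":
    delta = 0.05
    # Hypothetical numbers: a 10,000-point held-out test set versus a
    # PAC-Bayes certificate computed on all 60,000 training points.
    print("Test-set certificate: ", test_set_bound(0.02, 10_000, delta))
    print("PAC-Bayes certificate:", pac_bayes_kl_bound(0.025, 2_500.0, 60_000, delta))
```

The sketch makes the trade-off discussed in the abstract concrete: the test-set bound needs data to be held out (data the predictor never sees), whereas the PAC-Bayes certificate is computed on the same data used for learning, at the cost of the KL(Q||P) complexity term.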

Type: Proceedings paper
Title: Progress in Self-Certified Neural Networks
Event: NeurIPS 2021 workshop: Bayesian Deep Learning
Open access status: An open access version is available from UCL Discovery
Publisher version: https://papers.nips.cc/paper/2021
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences > Dept of Mathematics
URI: https://discovery.ucl.ac.uk/id/eprint/10138739
