UCL Discovery

Making Neural Networks Confidence-Calibrated and Practical

Ferianc, Martin; (2024) Making Neural Networks Confidence-Calibrated and Practical. Doctoral thesis (Ph.D), UCL (University College London). Green open access

PhD_Thesis.pdf - Accepted Version (22MB)
Abstract

Neural networks (NNs) have become powerful tools due to their predictive accuracy. However, NNs' real-world applicability depends not only on accuracy but also on the alignment between confidence and accuracy, known as confidence calibration. Bayesian NNs (BNNs) and NN ensembles achieve good confidence calibration but are computationally expensive. In contrast, pointwise NNs are computationally efficient but poorly calibrated. Addressing these issues, this thesis proposes methods that enhance confidence calibration while maintaining or improving computational efficiency. For users preferring pointwise NNs, we propose a methodology for regularising NN training with single or multiple artificial noise sources, improving confidence calibration and accuracy by up to 12% relative to standard training, without additional operations at runtime. For users able to modify the NN architecture, we propose the Single Architecture Ensemble (SAE) framework, which generalises multi-input and multi-exit architectures to embed multiple predictors into a single NN, emulating an ensemble and maintaining or improving confidence calibration and accuracy while reducing the number of compute operations or parameters by 1.5 to 3.7 times. For users who have already trained an NN ensemble, we propose knowledge distillation to transfer the ensemble's predictive distribution to a single NN, marginally improving confidence calibration and accuracy while halving the number of parameters or compute operations. We propose uniform quantisation for BNNs and benchmark its impact on the confidence calibration of pointwise NNs and BNNs, showing that, for example, 8-bit quantisation does not harm confidence calibration while reducing the memory footprint 4 times compared to 32-bit floating-point precision.
Lastly, we propose an optimisation framework and a Dropout block that enable BNNs on existing field-programmable gate array-based accelerators, improving their inference latency or energy efficiency by 2 to 100 times as well as their algorithmic performance across tasks. In summary, this thesis presents methods that reduce NNs' computational costs while maintaining or improving their algorithmic performance, making confidence-calibrated NNs practical in real-world applications.
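Confidence calibration, the central notion of the abstract, is commonly quantified with the Expected Calibration Error (ECE): predictions are binned by confidence, and the weighted gap between each bin's mean confidence and its accuracy is summed. The sketch below is illustrative only — the binning scheme and toy data are assumptions, not taken from the thesis:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |mean confidence - accuracy| in this bin, weighted by bin size
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# A model that is 80% accurate at 0.8 confidence is well calibrated (ECE near 0);
# a model that is 50% accurate at 0.9 confidence is overconfident (ECE near 0.4).
well = expected_calibration_error(np.full(10, 0.8), np.array([1] * 8 + [0] * 2))
over = expected_calibration_error(np.full(10, 0.9), np.array([1] * 5 + [0] * 5))
print(well, over)
```

A pointwise NN with high accuracy can still have a large ECE, which is the gap the thesis's methods aim to close at low computational cost.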

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Making Neural Networks Confidence-Calibrated and Practical
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: Copyright © The Author 2024. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Electronic and Electrical Eng
URI: https://discovery.ucl.ac.uk/id/eprint/10194306
