Fan, Hongxiang; Ferianc, Martin; Que, Zhiqiang; Niu, Xinyu; Rodrigues, Miguel; Luk, Wayne (2022). Accelerating Bayesian Neural Networks via Algorithmic and Hardware Optimizations. IEEE Transactions on Parallel and Distributed Systems, 33 (12), pp. 3387-3399. DOI: 10.1109/tpds.2022.3153682.
Text: tpds21_bayescnn_hf5_final.pdf - Accepted Version (1MB)
Abstract
Bayesian neural networks (BayesNNs) have demonstrated their advantages in safety-critical applications such as autonomous driving and healthcare, thanks to their ability to capture and represent model uncertainty. However, standard BayesNNs must be run repeatedly to perform the Monte Carlo sampling that quantifies their uncertainty, which burdens their real-world hardware performance. To address this performance issue, this article systematically exploits the extensive structured sparsity and redundant computation in BayesNNs. Unlike the unstructured or structured sparsity of standard convolutional NNs, the structured sparsity of BayesNNs arises from Monte Carlo Dropout and the sampling it requires during uncertainty estimation and prediction, and it can be exploited through both algorithmic and hardware optimizations. We first classify the observed sparsity patterns into three categories: channel sparsity, layer sparsity, and sample sparsity. On the algorithmic side, a framework is proposed to automatically explore these three sparsity categories without sacrificing algorithmic performance; we demonstrate that the structured sparsity can be exploited to accelerate CPU designs by up to 49 times and GPU designs by up to 40 times. On the hardware side, a novel hardware architecture is proposed to accelerate BayesNNs, achieving high hardware performance through runtime-adaptable hardware engines and intelligent skipping support. Implementing the proposed hardware design on an FPGA, our experiments demonstrate that the algorithm-optimized BayesNNs achieve up to 56 times speedup compared with unoptimized BayesNNs. Compared with the optimized GPU implementation, our FPGA design achieves up to 7.6 times speedup and up to 39.3 times higher energy efficiency.
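The abstract's premise — that Monte Carlo Dropout forces repeated stochastic forward passes at inference time — can be illustrated with a minimal sketch. The toy two-layer network, its weights, and the function names below are hypothetical stand-ins, not the paper's actual model or code; the sketch only shows why uncertainty estimation multiplies inference cost by the number of samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights, standing in for a trained BayesNN's layers.
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 3))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays ACTIVE at inference,
    so each call evaluates a different randomly thinned sub-network."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=50):
    """Run the network n_samples times: the mean over samples is the
    prediction, the std across samples is an uncertainty estimate.
    Cost scales linearly with n_samples -- the overhead the paper targets."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.standard_normal((1, 8))
mean, std = mc_dropout_predict(x)
```

The dropout masks are also the source of the structured sparsity the paper exploits: a zeroed mask entry makes an entire unit's contribution vanish for that sample, so the corresponding computation can be skipped rather than performed and discarded.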
| Field | Value |
|---|---|
| Type | Article |
| Title | Accelerating Bayesian Neural Networks via Algorithmic and Hardware Optimizations |
| Open access status | An open access version is available from UCL Discovery |
| DOI | 10.1109/tpds.2022.3153682 |
| Publisher version | http://dx.doi.org/10.1109/tpds.2022.3153682 |
| Language | English |
| Additional information | This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions. |
| Keywords | Hardware, Artificial neural networks, Uncertainty, Bayes methods, Standards, Estimation, Prediction algorithms |
| UCL classification | UCL > Provost and Vice Provost Offices > UCL SLASH > Faculty of Arts and Humanities > Dept of Information Studies |
| URI | https://discovery.ucl.ac.uk/id/eprint/10150056 |