Wu, C; Masoomi, A; Gretton, A; Dy, J; (2022) Deep Layer-wise Networks Have Closed-Form Weights. In: Proceedings of the 25th International Conference on Artificial Intelligence and Statistics. (pp. 188-225). Valencia, Spain.
Abstract
There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP). To better mimic the brain, training a network one layer at a time with only a "single forward pass" has been proposed as an alternative to bypass BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, do they have a closed-form solution? Second, how do we know when to stop adding more layers? This work proves that the Kernel Mean Embedding is the closed-form weight that achieves the network's global optimum while driving these networks to converge towards a highly desirable kernel for classification; we call it the Neural Indicator Kernel.
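The abstract's central claim, that each layer's optimal weights are given in closed form by Kernel Mean Embeddings, can be made concrete with a small sketch. The following is a loose, hypothetical illustration only, not the paper's actual construction: it approximates an RBF kernel feature map with random Fourier features and sets each layer's weights to the per-class empirical mean embeddings, stacking layers with a single forward pass each and no backpropagation. The names (`rff`, `layer_weights`) and all parameter choices are assumptions of this sketch.

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier features approximating an RBF kernel map phi(x)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def layer_weights(Phi, y):
    """Closed-form weights: one empirical kernel mean embedding per class."""
    return np.stack([Phi[y == c].mean(axis=0) for c in np.unique(y)], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # toy inputs
y = (X[:, 0] > 0).astype(int)           # toy binary labels

H = X
for _ in range(3):                      # one forward pass per layer, no BP
    n_features = 64
    W = rng.normal(size=(H.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Phi = rff(H, W, b)                  # feature map of the current layer
    H = Phi @ layer_weights(Phi, y)     # project onto class mean embeddings
```

On the second question (when to stop adding layers), one hypothetical heuristic in this setting would be to stop once a class-separation measure on `H` no longer improves between layers; the paper's own answer is convergence of the network towards its Neural Indicator Kernel.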
Type: Proceedings paper
Title: Deep Layer-wise Networks Have Closed-Form Weights
Event: 25th International Conference on Artificial Intelligence and Statistics (AISTATS) 2022
Open access status: An open access version is available from UCL Discovery
Publisher version: https://proceedings.mlr.press/v151/tzu-wu22a.html
Language: English
Additional information: Copyright 2022 by the author(s).
UCL classification: UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neurosci Unit
URI: https://discovery.ucl.ac.uk/id/eprint/10173161