UCL Discovery

Memory by accident: a theory of learning as a byproduct of network stabilization

Confavreux, Basile; Dorrell, William; Patel, Nishil; Saxe, Andrew; (2025) Memory by accident: a theory of learning as a byproduct of network stabilization. In: Proceedings of the 39th Conference on Neural Information Processing Systems (NeurIPS 2025). (pp. 1-28). NeurIPS (In press). Green open access


Abstract

Synaptic plasticity is widely considered to be crucial to the brain's ability to learn throughout life. Decades of theoretical work have therefore been invested in deriving and designing biologically plausible learning rules capable of granting various memory abilities to neural networks. Most of these theoretical approaches optimize directly for a desired memory function; but this procedure can lead to complex, finely tuned rules, rendering them brittle to perturbations and difficult to implement in practice. Instead, we build on recent work that automatically discovers large numbers of candidate plasticity rules operating in recurrent spiking neural networks. Surprisingly, despite the fact that these rules are selected solely to achieve network stabilization, we observe across a range of network models (feedforward and recurrent, rate-based and spiking) that almost all these rules endow the network with simple forms of memory such as familiarity detection, seemingly by accident. To understand this phenomenon, we study an analytic toy model. We observe that memory arises from the degeneracy of weight matrices that stabilize a network: where the network lands in this space of stable weights depends on its past inputs; that is, memory. Even simple Hebbian plasticity rules can utilize this degeneracy, creating a zoo of memory abilities with various lifetimes. In practice, the larger the network and the more co-active plasticity rules in the system, the stronger the memory-by-accident phenomenon becomes. Overall, our findings suggest that activity-silent memory is a near-unavoidable consequence of stabilization. Simple forms of memory, such as familiarity or novelty detection, appear to be widely available resources for plastic brain networks, suggesting that they could form the raw materials that were later sculpted into higher-order cognitive abilities.
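The core idea in the abstract — that a plasticity rule selected only for stabilization leaves the weights at a history-dependent point on a degenerate manifold of stable solutions, and that this residue acts as familiarity memory — can be illustrated with a minimal NumPy sketch. This is not the authors' model: the single linear unit, the homeostatic rule, and all parameters (`n`, `y0`, `eta`) are illustrative assumptions chosen only to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50        # number of input synapses (illustrative)
y0 = 1.0      # homeostatic set-point for the output rate
eta = 0.05    # learning rate

# Initial weights chosen so the output starts near y0 on average.
w = rng.uniform(0.5, 1.5, n) / n

A = rng.uniform(0, 2, n)   # "familiar" pattern, shown during learning
B = rng.uniform(0, 2, n)   # "novel" pattern, never shown

# Stabilizing (homeostatic) plasticity: push the output rate toward y0.
# Every w with w @ A == y0 is a fixed point, so the set of stable weights
# is degenerate; which point the rule converges to depends on the inputs
# it experienced -- an activity-silent trace of the past.
for _ in range(500):
    y = w @ A
    w -= eta * (y - y0) * A / (A @ A)   # normalized step, contraction by (1 - eta)

resp_familiar = w @ A   # driven to y0: pattern A has been absorbed
resp_novel = w @ B      # generically deviates from y0

print(f"familiar response: {resp_familiar:.6f}")
print(f"novel response:    {resp_novel:.6f}")
```

The deviation of the output from the set-point then serves as a novelty signal: the familiar pattern evokes a response at `y0`, while an unseen pattern generically does not, even though nothing was ever optimized for memory.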

Type: Proceedings paper
Title: Memory by accident: a theory of learning as a byproduct of network stabilization
Event: 39th Conference on Neural Information Processing Systems (NeurIPS 2025)
Open access status: An open access version is available from UCL Discovery
Publisher version: https://openreview.net/forum?id=5TWdcO9h4O
Language: English
Additional information: © The Authors 2025. Original content in this paper is licensed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/deed.en).
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neurosci Unit
URI: https://discovery.ucl.ac.uk/id/eprint/10217087
