Geerts, Jesse P; Gershman, Samuel J; Burgess, Neil; Stachenfeld, Kimberly L; (2023) A probabilistic successor representation for context-dependent learning. Psychological Review. (In press). DOI: 10.1037/rev0000414.
Abstract
Two of the main impediments to learning complex tasks are that relationships between different stimuli, including rewards, can be uncertain and context-dependent. Reinforcement learning (RL) provides a framework for learning, by predicting total future reward directly (model-free RL) or via predictions of future states (model-based RL). Within this framework, the "successor representation" (SR) predicts the total future occupancy of all states. A recent theoretical proposal suggests that the hippocampus encodes the SR in order to facilitate prediction of future reward. However, this proposal does not take into account how learning should adapt under uncertainty and switches of context. Here, we introduce a theory of learning SRs using prediction errors which includes optimally balancing uncertainty in new observations versus existing knowledge. We then generalize that approach to a multicontext setting, allowing the model to learn and maintain multiple task-specific SRs and infer which one to use at any moment based on the accuracy of its predictions. Thus, the context used for predictions can be determined by both the contents of the states themselves and the distribution of transitions between them. This probabilistic SR model captures animal behavior in tasks that require contextual memory and generalization, and unifies previous SR theory with hippocampal-dependent contextual decision-making. (PsycInfo Database Record (c) 2023 APA, all rights reserved.)
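The SR learning rule summarized in the abstract can be illustrated with a minimal temporal-difference (TD) sketch. This is not the authors' implementation (which is probabilistic and tracks uncertainty); it shows only the standard deterministic SR update that their model builds on. All names, parameters, and the example environment below are illustrative assumptions.

```python
import numpy as np

def td_update_sr(M, s, s_next, alpha=0.1, gamma=0.9):
    """One TD update of the SR matrix after observing a transition s -> s_next.

    M[s, s'] estimates the expected discounted future occupancy of state s'
    when starting from state s.
    """
    one_hot = np.zeros(M.shape[0])
    one_hot[s] = 1.0
    # SR prediction error: observed occupancy of s, plus the discounted
    # successor prediction from s_next, minus the current estimate for s.
    delta = one_hot + gamma * M[s_next] - M[s]
    M = M.copy()
    M[s] += alpha * delta
    return M

n_states = 4
M = np.eye(n_states)  # initialise: each state predicts only itself
# Learn from a simple deterministic loop: 0 -> 1 -> 2 -> 3 -> 0 -> ...
trajectory = [0, 1, 2, 3] * 500
for s, s_next in zip(trajectory[:-1], trajectory[1:]):
    M = td_update_sr(M, s, s_next)
```

With enough experience, `M` approaches the analytic SR for this loop, `(I - gamma * T)^-1`, where `T` is the transition matrix. The multicontext extension described in the abstract would maintain several such matrices and weight them by how well each one's predictions match incoming transitions.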
Type: | Article |
---|---|
Title: | A probabilistic successor representation for context-dependent learning |
Location: | United States |
Open access status: | An open access version is available from UCL Discovery |
DOI: | 10.1037/rev0000414 |
Publisher version: | https://doi.org/10.1037/rev0000414 |
Language: | English |
Additional information: | This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0; http://creativecommons.org/licenses/by/4.0). This license permits copying and redistributing the work in any medium or format, as well as adapting the material for any purpose, even commercially. |
Keywords: | Reinforcement learning, successor representation, uncertainty, context |
UCL classification: | UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology > Clinical and Experimental Epilepsy |
URI: | https://discovery.ucl.ac.uk/id/eprint/10170153 |