
Predictive representations can link model-based reinforcement learning to model-free mechanisms

Russek, EM; Momennejad, I; Botvinick, MM; Gershman, SJ; Daw, ND; (2017) Predictive representations can link model-based reinforcement learning to model-free mechanisms. PLOS Computational Biology, 13(9), Article e1005768. 10.1371/journal.pcbi.1005768. Green open access

Text: journal.pcbi.1005768.pdf - Published Version (4MB)

Abstract

Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
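The abstract's core computational idea is the successor representation (SR): a predictive map M in which M(s, j) estimates the expected discounted future occupancy of state j when starting from state s, so that value predictions can be read out as V(s) = sum_j M(s, j) R(j). As a hedged illustration only (this is not the paper's code), the following minimal tabular Python sketch shows how such an SR can be learned with a TD(0) update and how reward information combines with it at decision time; the toy ring environment, state count, and learning parameters are assumptions chosen for clarity.

    import numpy as np

    # Minimal tabular sketch (illustrative, not the authors' implementation).
    # M[s, j] estimates the expected discounted future occupancy of state j
    # starting from state s; values are read out as V = M @ R, so a change
    # in rewards updates values without replanning through a full model.
    n_states = 5
    gamma, alpha = 0.9, 0.1          # assumed discount and learning rate

    M = np.zeros((n_states, n_states))   # successor representation estimate
    R = np.zeros(n_states)               # one-step reward estimates
    R[n_states - 1] = 1.0                # assume reward in the last state

    def sr_td_update(s, s_next):
        """TD(0) update of the SR after observing a transition s -> s_next."""
        onehot = np.eye(n_states)[s]
        M[s] += alpha * (onehot + gamma * M[s_next] - M[s])

    def value(s):
        """Value prediction: successor occupancies weighted by rewards."""
        return M[s] @ R

    # Toy experience: a deterministic ring 0 -> 1 -> ... -> n_states-1 -> 0.
    for _ in range(500):
        for s in range(n_states):
            sr_td_update(s, (s + 1) % n_states)

    print([round(value(s), 2) for s in range(n_states)])

Because R enters only at readout, changing a reward immediately changes the value estimates, which is the sense in which this TD-based scheme reproduces a subset of model-based behaviour with less decision-time computation than dynamic programming.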

Type: Article
Title: Predictive representations can link model-based reinforcement learning to model-free mechanisms
Location: United States
Open access status: An open access version is available from UCL Discovery
DOI: 10.1371/journal.pcbi.1005768
Publisher version: https://doi.org/10.1371/journal.pcbi.1005768
Language: English
Additional information: This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Keywords: Algorithms, Animals, Computational Biology, Computer Simulation, Decision Making, Humans, Models, Neurological, Reinforcement (Psychology), Time Factors
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology > Imaging Neuroscience
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neurosci Unit
URI: https://discovery.ucl.ac.uk/id/eprint/10070658
Downloads since deposit: 69
