UCL Discovery

Retrospective model-based inference guides model-free credit assignment

Moran, R; Keramati, M; Dayan, P; Dolan, RJ; (2019) Retrospective model-based inference guides model-free credit assignment. Nature Communications , 10 , Article 750. 10.1038/s41467-019-08662-8. Green open access

Dolan_Retrospective model-based inference guides model-free credit assignment_VoR.pdf - Published Version

Abstract

An extensive reinforcement learning literature shows that organisms assign credit efficiently, even under conditions of state uncertainty. However, little is known about credit assignment when state uncertainty is subsequently resolved. Here, we address this problem within the framework of an interaction between model-free (MF) and model-based (MB) control systems. We present and support experimentally a theory of MB retrospective inference. Within this framework, an MB system resolves uncertainty that prevailed when actions were taken, thereby guiding MF credit assignment. Using a task in which there was initial uncertainty about which lottery had been chosen, we found that when participants’ momentary uncertainty about which lottery had generated an outcome was resolved by provision of subsequent information, participants preferentially assigned credit within an MF system to the lottery they retrospectively inferred was responsible for this outcome. These findings extend our knowledge about the range of MB functions and the scope of system interactions.
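The mechanism the abstract describes can be illustrated with a minimal sketch. This is not the authors' computational model; it is a hypothetical toy in which a Bayesian (MB) posterior over which lottery generated an outcome weights a standard model-free value update, so credit flows preferentially to the lottery retrospectively inferred as responsible. The priors, likelihoods, and learning rate are illustrative assumptions.

```python
def mb_posterior(prior, likelihoods):
    """Model-based retrospective inference: Bayesian posterior over
    which lottery generated the observed outcome."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def mf_update(q, responsibility, reward, alpha=0.1):
    """Model-free update: each lottery's prediction error is scaled by
    its inferred responsibility for the outcome."""
    return [qi + alpha * w * (reward - qi) for qi, w in zip(q, responsibility)]

# Two lotteries; the agent is initially uncertain which one produced the reward.
q = [0.0, 0.0]
prior = [0.5, 0.5]         # equal chance either lottery was played
likelihoods = [0.9, 0.2]   # P(observed outcome | lottery i), hypothetical values
posterior = mb_posterior(prior, likelihoods)
q = mf_update(q, posterior, reward=1.0)
print(posterior)  # lottery 0 carries most of the inferred responsibility
print(q)          # and therefore absorbs most of the model-free credit
```

Under these assumptions, resolving uncertainty (a sharper posterior) concentrates the credit on one lottery, while an unresolved, uniform posterior would spread it evenly — the contrast the task described above is designed to probe.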

Type: Article
Title: Retrospective model-based inference guides model-free credit assignment
Open access status: An open access version is available from UCL Discovery
DOI: 10.1038/s41467-019-08662-8
Publisher version: https://doi.org/10.1038/s41467-019-08662-8
Language: English
Additional information: Copyright © The Author(s) 2019. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Keywords: Decision, Human behaviour, Learning algorithms, Operant learning, Reward
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology > Imaging Neuroscience
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences
URI: https://discovery.ucl.ac.uk/id/eprint/10069034
