Dayan, P., & Daw, N. D. (2008). Decision theory, reinforcement learning, and the brain. Cognitive, Affective, & Behavioral Neuroscience, 8(4), 429-453. doi:10.3758/CABN.8.4.429
Full text not available from this repository.
Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration, and we discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.
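To make the abstract's distinction concrete, the "model-free" family it mentions includes temporal-difference methods such as tabular Q-learning, which learn action values directly from sampled rewards without a model of the task. The sketch below is purely illustrative and not from the paper; the toy two-state chain MDP, the function name `q_learning`, and all parameter values are assumptions chosen for the example.

```python
import random

def q_learning(transitions, rewards, n_states, n_actions,
               episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a small deterministic MDP (illustrative sketch).

    transitions[(s, a)] -> next state; rewards[(s, a)] -> immediate reward.
    A state with no outgoing transitions is terminal.
    """
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while (s, 0) in transitions:          # run until a terminal state
            if rng.random() < epsilon:        # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2, r = transitions[(s, a)], rewards[(s, a)]
            target = r
            if (s2, 0) in transitions:        # bootstrap only if non-terminal
                target += gamma * max(q[s2])
            # temporal-difference update toward the sampled target
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

# Hypothetical two-state chain: state 0 -> state 1 -> terminal state 2;
# only action 1 taken in state 1 yields reward.
T = {(0, 0): 1, (0, 1): 1, (1, 0): 2, (1, 1): 2}
R = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
Q = q_learning(T, R, n_states=3, n_actions=2)
```

After training, the value of the rewarded action in state 1 dominates its alternative, and state 0's values reflect the discounted reward one step ahead; a model-based method would instead compute these values by planning over `T` and `R` directly.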
|Title:||Decision theory, reinforcement learning, and the brain|
|Keywords:||PROBABILISTIC POPULATION CODES, TEMPORAL DIFFERENCE MODELS, BAYESIAN INTEGRATION, PREFRONTAL CORTEX, SENSORY STIMULI, DORSAL STRIATUM, NEURAL SYSTEMS, BASAL GANGLIA, VISUAL-MOTION, REWARD|
|UCL classification:||UCL > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neuroscience Unit|