UCL Discovery

Using expectation-maximization for reinforcement learning

Dayan, P; Hinton, GE; (1997) Using expectation-maximization for reinforcement learning. Neural Computation, 9(2), 271-278.

Full text not available from this repository.

Abstract

We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
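Since the full text is not available here, the following is a minimal sketch of the RPP in its batch form, following the standard statement of the procedure for a set of independent Bernoulli action units: each unit's firing probability is replaced by the reward-weighted frequency with which that unit fired, which is the M-step under the EM mapping the abstract describes. The toy reward function, batch size, and target pattern below are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n independent Bernoulli action units with
# firing probabilities p, and a nonnegative payoff for each action vector.
n = 5
p = np.full(n, 0.5)
target = np.array([1, 0, 1, 1, 0])  # hypothetical "good" action pattern

def reward(a):
    # Nonnegative payoff: number of units matching the target pattern.
    return float(np.sum(a == target))

for sweep in range(50):
    # Sample a batch of action vectors from the current policy.
    actions = (rng.random((200, n)) < p).astype(float)
    rewards = np.array([reward(a) for a in actions])
    # RPP update: p_i becomes the reward-weighted frequency with which
    # unit i fired. Under the EM mapping, this is an M-step, so for
    # nonnegative rewards the mean return does not decrease, even
    # though the parameters may change by large amounts.
    p = rewards @ actions / rewards.sum()

print(np.round(p, 2))  # probabilities concentrate on the target pattern

Because the update is a ratio of reward-weighted averages rather than a small gradient step, it needs no learning rate; this is the sense in which the RPP can make large parameter changes while still guaranteeing improvement.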

Type: Article
Title: Using expectation-maximization for reinforcement learning
Keywords: CONNECTIONIST
UCL classification: UCL > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neuroscience Unit
