UCL Discovery

Using expectation-maximization for reinforcement learning

Dayan, P.; Hinton, G. E. (1997) Using expectation-maximization for reinforcement learning. Neural Computation, 9(2), 271-278.

Full text not available from this repository.


We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
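To make the idea concrete, here is a minimal sketch (not the paper's implementation) of an RPP-style update for a single stochastic binary action in a bandit setting with non-negative payoffs. The key point the abstract alludes to is that the update is a reward-weighted average, which is an M-step-like reestimation rather than a gradient step, so the parameter can jump a long way in one update. The function name and setup are illustrative assumptions:

```python
def rpp_update(actions, rewards):
    """Reward-weighted reestimation of the probability of taking action 1.

    actions: list of 0/1 choices sampled from the current policy.
    rewards: list of non-negative payoffs received for those choices.
    Returns the new probability p = E[r * a] / E[r], i.e. the fraction
    of total payoff earned on trials where action 1 was taken. This is
    an M-step-style update, not a small gradient step.
    """
    total_reward = sum(rewards)
    if total_reward == 0.0:
        raise ValueError("RPP-style update is undefined when all payoffs are zero")
    return sum(r * a for a, r in zip(actions, rewards)) / total_reward


# Example: action 1 earned 2.0 of the 2.5 total payoff, so p jumps to 0.8
# regardless of how far the previous parameter value was from 0.8.
p_new = rpp_update(actions=[1, 0, 1], rewards=[1.0, 0.5, 1.0])
```

Because the new probability depends only on the payoff-weighted statistics of the sampled batch, a single update can move the parameter arbitrarily far, which is why a gradient-based convergence argument does not apply and an EM-style monotonicity proof is needed instead.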

Type: Article
Title: Using expectation-maximization for reinforcement learning
URI: http://discovery.ucl.ac.uk/id/eprint/170891
