UCL Discovery

Reinforcement learning via AIXI approximation

Veness, J; Ng, KS; Hutter, M; Silver, D; (2010) Reinforcement learning via AIXI approximation. In: Proceedings of the National Conference on Artificial Intelligence. (pp. 605-611).

Full text not available from this repository.

Abstract

This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains. Copyright © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
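For context, the optimality notion being approximated is usually written as the AIXI expectimax expression below; this is a sketch following the standard notation of the AIXI literature (planning horizon m, universal Turing machine U, program length ℓ(q)), not material from this record. The paper's agent approximates the outer expectimax search with Monte Carlo Tree Search and the inner program-weighted mixture with an agent-specific Context Tree Weighting mixture.

\[
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\Bigl[\, r_t + \cdots + r_m \,\Bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Roughly, the Context Tree Weighting component stands in for the 2^{-ℓ(q)}-weighted sum over programs, replacing it with a Bayesian mixture over context-tree models that can be updated online, while the tree search samples the nested sums and maximisations rather than enumerating them.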

Type: Proceedings paper
Title: Reinforcement learning via AIXI approximation
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science
