UCL Discovery

Interpretable reward redistribution in reinforcement learning: a causal approach

Zhang, Y; Du, Y; Huang, B; Wang, Z; Wang, J; Fang, M; Pechenizkiy, M; (2023) Interpretable reward redistribution in reinforcement learning: a causal approach. In: Oh, A and Naumann, T and Globerson, A and Saenko, K and Hardt, M and Levine, S, (eds.) 37th International Conference on Neural Information Processing Systems. (pp. 20208-20229). Curran Associates Inc.: Red Hook, NY, USA.

Text: NeurIPS-2023-interpretable-reward-redistribution-in-reinforcement-learning-a-causal-approach-Paper-Conference.pdf - Published Version (5MB)

Abstract

A major challenge in reinforcement learning is to determine which state-action pairs are responsible for future rewards that are delayed. Reward redistribution serves as a solution, re-assigning credit to each time step from observed sequences. While the majority of current approaches construct the reward redistribution in an uninterpretable manner, we propose to explicitly model the contributions of state and action from a causal perspective, resulting in an interpretable reward redistribution that preserves policy invariance. In this paper, we start by studying the role of causal generative models in reward redistribution, characterizing the generation of Markovian rewards and trajectory-wise long-term return, and then propose a framework, called Generative Return Decomposition (GRD), for policy optimization in delayed reward scenarios. Specifically, GRD first identifies the unobservable Markovian rewards and causal relations in the generative process. Then, GRD makes use of the identified causal generative model to form a compact representation for training the policy over the most favorable subspace of the agent's state space. Theoretically, we show that the unobservable Markovian reward function is identifiable, as are the underlying causal structure and causal models. Experimental results show that our method outperforms state-of-the-art methods, and the provided visualization further demonstrates the interpretability of our method. The project page is located at https://reedzyd.github.io/GenerativeReturnDecomposition/.
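
To illustrate the general idea of return decomposition that the abstract describes, the sketch below shows a generic per-step reward predictor trained so that its predicted Markovian rewards sum to the observed trajectory-wise return. This is a minimal, illustrative example only; it is not the authors' GRD implementation, and it omits the causal structure learning and compact state representation that are central to the paper. All class names, dimensions, and hyperparameters are assumptions.

import torch
import torch.nn as nn

class RewardRedistributor(nn.Module):
    """Predicts a per-step (Markovian) reward from each state-action pair."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # states: (T, state_dim), actions: (T, action_dim) -> per-step rewards (T,)
        return self.net(torch.cat([states, actions], dim=-1)).squeeze(-1)


def redistribution_loss(model, states, actions, episodic_return):
    # The only supervision is the delayed, trajectory-wise return observed at
    # the end of the episode: make the predicted per-step rewards sum to it.
    predicted = model(states, actions)
    return (predicted.sum() - episodic_return) ** 2


# Illustrative usage with random data (hypothetical dimensions).
model = RewardRedistributor(state_dim=4, action_dim=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

states = torch.randn(50, 4)          # a trajectory of 50 steps
actions = torch.randn(50, 2)
episodic_return = torch.tensor(7.0)  # delayed return observed at episode end

loss = redistribution_loss(model, states, actions, episodic_return)
optimizer.zero_grad()
loss.backward()
optimizer.step()

The redistributed rewards produced by such a model can then be used as dense feedback for policy optimization; the paper's contribution is to make this decomposition interpretable and identifiable by modelling it with an explicit causal generative process.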

Type: Proceedings paper
Title: Interpretable reward redistribution in reinforcement learning: a causal approach
Event: NeurIPS 2023
ISBN-13: 9781713899921
Open access status: An open access version is available from UCL Discovery
DOI: 10.5555/3666122.3667009
Publisher version: https://dl.acm.org/doi/10.5555/3666122.3667009
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10192034
