Wang, Z; Li, X; Sun, L; Zhang, H; Liu, H; Wang, J; (2024) Learning State-Specific Action Masks for Reinforcement Learning. Algorithms, 17(2), Article 60. 10.3390/a17020060.
Abstract
Efficient yet sufficient exploration remains a critical challenge in reinforcement learning (RL), especially for Markov Decision Processes (MDPs) with vast action spaces. Previous approaches have commonly projected the original action space into a latent space or employed environmental action masks to reduce the action possibilities. Nevertheless, these methods often lack interpretability or rely on expert knowledge. In this study, we introduce a novel method for automatically reducing the action space in environments with discrete action spaces while preserving interpretability. The proposed approach learns state-specific masks with a dual purpose: (1) eliminating actions with minimal influence on the MDP and (2) aggregating actions with identical behavioral consequences within the MDP. Specifically, we introduce a novel concept called Bisimulation Metrics on Actions by States (BMAS) to quantify the behavioral consequences of actions within the MDP and design a dedicated mask model to ensure their binary nature. Crucially, we present a practical learning procedure for training the mask model, leveraging transition data collected by any RL policy. Our method is plug-and-play and adaptable to any RL policy; to validate its effectiveness, we integrate it into two prominent RL algorithms, DQN and PPO. Experimental results obtained from Maze, Atari, and μRTS2 reveal a substantial acceleration in the RL learning process and noteworthy performance improvements facilitated by the introduced approach.
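The abstract describes applying a learned, binary, state-specific mask on top of a standard RL algorithm. The following is a minimal sketch of that idea for the DQN case, not the authors' released code: the mask model architecture, the 0.5 binarization threshold, and all names below are assumptions introduced for illustration.

```python
# Hypothetical sketch: a state-conditioned binary action mask applied to
# DQN action selection. The MaskModel architecture and threshold are
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

class MaskModel(nn.Module):
    """Predicts a per-action keep/drop decision for a given state."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
            nn.Sigmoid(),  # per-action keep probability in (0, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Binarize so the mask is strictly 0/1, matching the binary masks
        # the abstract describes; 0.5 is an assumed threshold.
        return (self.net(state) > 0.5).float()

def masked_greedy_action(q_net: nn.Module, mask_model: MaskModel,
                         state: torch.Tensor) -> torch.Tensor:
    """Greedy DQN action selection restricted to unmasked actions."""
    q_values = q_net(state)   # shape: (batch, num_actions)
    mask = mask_model(state)  # shape: (batch, num_actions), entries 0 or 1
    # Guard against a degenerate all-zero mask by falling back to all ones.
    mask = torch.where(mask.sum(-1, keepdim=True) > 0,
                       mask, torch.ones_like(mask))
    # Masked-out actions get -inf so argmax can never select them.
    q_masked = q_values.masked_fill(mask == 0, float("-inf"))
    return q_masked.argmax(dim=-1)
```

For a policy-gradient method such as PPO, the same mask would instead be applied to the policy logits before the softmax (again by setting masked entries to -inf), which is what makes the approach plug-and-play across both value-based and policy-based algorithms.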
| Type: | Article |
| --- | --- |
| Title: | Learning State-Specific Action Masks for Reinforcement Learning |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.3390/a17020060 |
| Publisher version: | http://dx.doi.org/10.3390/a17020060 |
| Language: | English |
| Additional information: | © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
| Keywords: | reinforcement learning; exploration efficiency; space reduction |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10188413 |