TY  - GEN
N2  - Training reinforcement learning (RL) agents to achieve desired goals while also acting morally is a challenging problem. Transformer-based language models (LMs) have shown some promise in moral awareness, but their use in different contexts is problematic because of the complexity and implicitness of human morality. In this paper, we build on text-based games, which are challenging environments for current RL agents, and propose the HuMAL (Human-guided Morality Awareness Learning) algorithm, which adaptively learns personal values through human-agent collaboration with minimal manual feedback. We evaluate HuMAL on the Jiminy Cricket benchmark, a set of text-based games with various scenes and dense morality annotations, using both simulated and actual human feedback. The experimental results demonstrate that with a small amount of human feedback, HuMAL can improve task performance and reduce immoral behavior in a variety of games, and is adaptable to different personal values.
UR  - http://dx.doi.org/10.1609/aaai.v38i19.30155
EP  - 21582
ID  - discovery10194860
TI  - Human-Guided Moral Decision Making in Text-Based Games
SN  - 2159-5399
Y1  - 2024/03/25/
AV  - public
A1  - Shi, Z
A1  - Fang, M
A1  - Chen, L
A1  - Du, Y
A1  - Wang, J
PB  - Association for the Advancement of Artificial Intelligence (AAAI)
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
SP  - 21574
ER  -