
Reinforcing LLM Agents via Policy Optimization with Action Decomposition

Wen, M; Wan, Z; Wang, J; Zhang, W; Wen, Y; (2024) Reinforcing LLM Agents via Policy Optimization with Action Decomposition. In: Advances in Neural Information Processing Systems 37 (NeurIPS 2024). NeurIPS Proceedings.

PDF: 17978_Reinforcing_LLM_Agents_v.pdf - Accepted Version (4MB)

Abstract

Language models as intelligent agents push the boundaries of sequential decision-making but struggle with limited knowledge of environmental dynamics and exponentially large action spaces. Recent efforts such as GLAM and TWOSOME manually constrain the action space to a restricted subset and employ reinforcement learning to align agents' knowledge with specific environments. However, they overlook fine-grained credit assignment for intra-action tokens, which is essential for efficient language agent optimization, and they rely on human prior knowledge to restrict the action space. This paper proposes decomposing language agent optimization from the action level to the token level, offering finer supervision for each intra-action token and manageable optimization complexity in environments with unrestricted action spaces. Beginning with the simplification of flattening all actions, we theoretically explore the discrepancies between action-level optimization and this naive token-level optimization. We then derive the Bellman backup with Action Decomposition (BAD) to integrate credit assignment for both intra-action and inter-action tokens, effectively eliminating the discrepancies. Implementing BAD within the PPO algorithm, we introduce Policy Optimization with Action Decomposition (POAD). POAD benefits from a finer-grained credit assignment process and lower optimization complexity, leading to enhanced learning efficiency and generalization in aligning language agents with interactive environments. We validate POAD across diverse testbeds, and the results affirm the advantages of our approach and the correctness of our theoretical analysis.
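To make the mechanism described in the abstract concrete, the following Python fragment is a minimal sketch, not the authors' released code. It assumes, per the abstract, that BAD applies the environment reward and the discount factor only across action boundaries (inter-action token transitions), while intra-action tokens receive no reward and propagate value targets undiscounted; the function name bad_targets and the tensor layout are illustrative.

import torch

def bad_targets(values, rewards, action_ends, gamma=0.99):
    """Hypothetical sketch of the Bellman backup with Action Decomposition (BAD).

    values:      (T,) token-level value estimates V(s_t) for a T-token trajectory
    rewards:     (T,) environment reward, non-zero only at the last token of each action
    action_ends: (T,) bool, True where a token completes an action
    gamma:       discount applied only across action boundaries
    """
    T = values.shape[0]
    targets = torch.empty(T)
    next_value = 0.0  # bootstrap value after the final token (episode end)
    for t in reversed(range(T)):
        if action_ends[t]:
            # Inter-action backup: environment reward plus the discounted
            # value of the next action's first-token state.
            targets[t] = rewards[t] + gamma * next_value
        else:
            # Intra-action backup: no reward, no discount; the value of the
            # next token's state is passed through unchanged.
            targets[t] = next_value
        next_value = values[t]  # V(s_{t+1}) for the preceding token
    return targets

Under these assumptions, the undiscounted intra-action backup is what removes the discrepancy the abstract refers to: naively flattening actions into tokens would discount (and potentially reward) every token, whereas this backup keeps the token-level objective consistent with the original action-level one.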

Type: Proceedings paper
Title: Reinforcing LLM Agents via Policy Optimization with Action Decomposition
Event: 38th Conference on Neural Information Processing Systems (NeurIPS 2024)
Open access status: An open access version is available from UCL Discovery
Publisher version: https://proceedings.neurips.cc/paper_files/paper/2...
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Reinforcement Learning, Language Agent, LLM agent, Large Language Models
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10212514
Downloads since deposit: 3
