Paglieri, D;
Cupiał, B;
Coward, S;
Piterbarg, U;
Wolczyk, M;
Khan, A;
Pignatelli, E;
... Rocktäschel, T;
(2025)
BALROG: Benchmarking Agentic LLM and VLM Reasoning On Games.
In:
13th International Conference on Learning Representations, ICLR 2025.
(pp. 36061-36097).
ICLR
Abstract
Large Language Models (LLMs) and Vision Language Models (VLMs) possess extensive knowledge and exhibit promising reasoning abilities; however, they still struggle to perform well in complex, dynamic environments. Real-world tasks require handling intricate interactions, advanced spatial reasoning, long-term planning, and continuous exploration of new strategies, areas in which we lack effective methodologies for comprehensively evaluating these capabilities. To address this gap, we introduce BALROG, a novel benchmark designed to assess the agentic capabilities of LLMs and VLMs through a diverse set of challenging games. Our benchmark incorporates a range of existing reinforcement learning environments with varying levels of difficulty, ranging from tasks that are solvable by non-expert humans in seconds to extremely challenging ones that may take years to master (e.g., the NetHack Learning Environment). We devise fine-grained metrics to measure performance and conduct an extensive evaluation of several popular open-source and closed-source LLMs and VLMs. Our findings indicate that while current models achieve partial success in the easier games, they struggle significantly with more challenging tasks. Notably, we observe severe deficiencies in vision-based decision-making, as several models perform worse when visual representations of the environments are provided. We release BALROG as an open and user-friendly benchmark to facilitate future research and development in the agentic community. Code and Leaderboard at balrogai.com.
| Type: | Proceedings paper |
|---|---|
| Title: | BALROG: Benchmarking Agentic LLM and VLM Reasoning On Games |
| Event: | ICLR 2025 |
| Open access status: | An open access version is available from UCL Discovery |
| Publisher version: | https://openreview.net/forum?id=fp6t3F669F |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions. |
| Keywords: | LLM, VLM, Agents, Benchmark, RL, Reasoning, Games |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10216728 |