UCL Discovery

AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training

Wan, Z; Feng, X; Wen, M; McAleer, SM; Wen, Y; Zhang, W; Wang, J; (2024) AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training. In: NeurIPS 2023 Foundation Models for Decision Making Workshop. (pp. 49890-49920). NeurIPS.

Abstract

Recent works like Tree-of-Thought (ToT) and Reasoning via Planning (RAP) aim to augment the multi-step reasoning capabilities of LLMs by using tree-search algorithms. These methods rely on prompting a pre-trained model to serve as a value function and focus on problems with low search depth. As a result, they cannot benefit from in-domain training and rely solely on the pre-training process; they will not work in domains where the pre-trained LLM lacks the knowledge to serve as an effective value function, or in domains that require long-horizon planning. To address these limitations, we present an AlphaZero-like tree-search learning framework for LLMs (termed TS-LLM), systematically illustrating how tree-search with a learned value function can guide LLM decoding. TS-LLM distinguishes itself in two key ways. (1) Leveraging a learned value function and AlphaZero-like algorithms, our approach is generally adaptable to a wide range of tasks, language models of any size, and tasks of varying search depths. (2) Our approach can guide LLMs during both inference and training, iteratively improving the LLMs. Empirical results across reasoning, planning, alignment, and decision-making tasks show that TS-LLM outperforms existing approaches and can handle trees with a depth of 64.
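To illustrate the general idea described in the abstract, the following is a minimal sketch (not the authors' code) of how an AlphaZero-style tree search with a learned value function could guide LLM decoding. The LLM policy (`propose_steps`) and the value network (`estimate_value`) are hypothetical stubs standing in for a real language model and a trained value head.

```python
# Minimal MCTS-style decoding sketch, assuming an LLM that proposes candidate
# next reasoning steps and a learned value network that scores partial sequences.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state          # partial reasoning sequence (list of steps)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value_sum = 0.0

    def ucb(self, c=1.4):
        # Upper-confidence bound used to choose which child to descend into.
        if self.visits == 0:
            return float("inf")
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def propose_steps(state, k=3):
    # Placeholder: sample k candidate next steps from the LLM policy.
    return [state + [f"step{len(state)}_{i}"] for i in range(k)]

def estimate_value(state):
    # Placeholder: score a partial sequence with a learned value network.
    return random.random()

def search(root_state, simulations=50, max_depth=64):
    root = Node(root_state)
    for _ in range(simulations):
        node = root
        # Selection: descend by UCB until reaching a leaf.
        while node.children:
            node = max(node.children, key=lambda n: n.ucb())
        # Expansion: grow the tree with LLM-proposed continuations.
        if len(node.state) < max_depth:
            node.children = [Node(s, parent=node) for s in propose_steps(node.state)]
            node = random.choice(node.children)
        # Evaluation: score the leaf with the learned value function.
        value = estimate_value(node.state)
        # Backup: propagate the value estimate back to the root.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
    # The most visited child gives the decoding move to commit to.
    best = max(root.children, key=lambda n: n.visits)
    return best.state

if __name__ == "__main__":
    print(search([]))
```

In an AlphaZero-like loop, the sequences found by such a search would also be fed back as training data, iteratively improving both the policy LLM and the value function, as the paper describes.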

Type: Proceedings paper
Title: AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training
Open access status: An open access version is available from UCL Discovery
Publisher version: https://openreview.net/forum?id=PJfc4x2jXY
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Large Language Models, Tree Search, Value Function, Reinforcement Learning
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10206803