Zeng, Y; Liu, G; Ma, W; Yang, N; Zhang, H; Wang, J; (2024) Token-level Direct Preference Optimization. In: Proceedings of Machine Learning Research. (pp. 58348-58365). PMLR.
Text: Wang_Token-level Direct Preference Optimization_VoR.pdf (715kB)
Abstract
Fine-tuning pre-trained Large Language Models (LLMs) is essential to align them with human values and intentions. This process often utilizes methods like pairwise comparisons and KL divergence against a reference LLM, focusing on the evaluation of full answers generated by the models. However, the generation of these responses occurs at the token level, in a sequential, auto-regressive fashion. In this paper, we introduce Token-level Direct Preference Optimization (TDPO), a novel approach to align LLMs with human preferences by optimizing policy at the token level. Unlike previous methods, which face challenges in divergence efficiency, TDPO incorporates forward KL divergence constraints for each token, improving alignment and diversity. Utilizing the Bradley-Terry model for a token-based reward system, TDPO enhances the regulation of KL divergence, while preserving simplicity without the need for explicit reward modeling. Experimental results across various text tasks demonstrate TDPO's superior performance in balancing alignment with generation diversity. Notably, fine-tuning with TDPO strikes a better balance than DPO on the controlled sentiment generation and single-turn dialogue datasets, and significantly improves the quality of generated responses compared to both DPO and PPO-based RLHF methods. Our code is open-sourced at https://github.com/Vance0124/Tokenlevel-Direct-Preference-Optimization.
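For orientation, the sketch below illustrates a token-level DPO-style objective of the kind the abstract describes: a standard DPO preference margin built from per-token log-probabilities, augmented with a sequential forward KL term computed token by token against the reference model. The helper names (`gather_token_logps`, `sequential_kl`, `tdpo_style_loss`) and the hyperparameters `beta` and `alpha` are illustrative assumptions, and the exact weighting and stop-gradient details of the paper's objective are not reproduced here; the open-sourced repository linked above is the authoritative implementation.

```python
# Illustrative sketch only: a token-level, DPO-style preference loss in the
# spirit of TDPO. Function names, hyperparameters, and the exact form of the
# sequential KL penalty are assumptions, not the paper's verbatim objective.
import torch
import torch.nn.functional as F


def gather_token_logps(logits, labels, mask):
    """Per-token log-probabilities of the realized tokens, zeroed where mask == 0."""
    logps = torch.log_softmax(logits, dim=-1)
    token_logps = torch.gather(logps, dim=-1, index=labels.unsqueeze(-1)).squeeze(-1)
    return token_logps * mask


def sequential_kl(policy_logits, ref_logits, mask):
    """Sum over positions of the forward KL(ref || policy) at each token."""
    p_ref = torch.softmax(ref_logits, dim=-1)
    per_token_kl = (p_ref * (torch.log_softmax(ref_logits, dim=-1)
                             - torch.log_softmax(policy_logits, dim=-1))).sum(-1)
    return (per_token_kl * mask).sum(-1)


def tdpo_style_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected,
                    chosen_labels, rejected_labels, chosen_mask, rejected_mask,
                    beta=0.1, alpha=0.5):
    # Sequence-level log-ratio terms, as in DPO, assembled from per-token log-probs.
    chosen_logratio = (gather_token_logps(pol_chosen, chosen_labels, chosen_mask).sum(-1)
                       - gather_token_logps(ref_chosen, chosen_labels, chosen_mask).sum(-1))
    rejected_logratio = (gather_token_logps(pol_rejected, rejected_labels, rejected_mask).sum(-1)
                         - gather_token_logps(ref_rejected, rejected_labels, rejected_mask).sum(-1))

    # Token-level KL control: penalize the gap in sequential forward KL
    # between the rejected and chosen responses.
    kl_chosen = sequential_kl(pol_chosen, ref_chosen, chosen_mask)
    kl_rejected = sequential_kl(pol_rejected, ref_rejected, rejected_mask)

    margin = (beta * (chosen_logratio - rejected_logratio)
              - alpha * beta * (kl_rejected - kl_chosen))
    return -F.logsigmoid(margin).mean()
```

Here `pol_*` and `ref_*` are assumed to be next-token logits already aligned with the label tensors (i.e., shifted by one position), with padding excluded via the masks.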
| Type: | Proceedings paper |
|---|---|
| Title: | Token-level Direct Preference Optimization |
| Event: | 41st International Conference on Machine Learning |
| Open access status: | An open access version is available from UCL Discovery |
| Publisher version: | https://proceedings.mlr.press/v235/zeng24c.html |
| Language: | English |
| Additional information: | Copyright 2024 by the author(s). Original content in this paper is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0). |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10206797 |