UCL Discovery

Benchmarking LLMs via Uncertainty Quantification

Ye, F; Yang, M; Pang, J; Wang, L; Wong, DF; Yilmaz, E; Shi, S; (2024) Benchmarking LLMs via Uncertainty Quantification. In: Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024) Track on Datasets and Benchmarks. NeurIPS: Vancouver, BC, Canada. Green open access

Text: 1565_Benchmarking_LLMs_via_Unc.pdf - Published Version (1MB)

Abstract

The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect - uncertainty, which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce a new benchmarking approach for LLMs that integrates uncertainty quantification. Our examination involves nine LLMs (LLM series) spanning five representative natural language processing tasks. Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs. These results underscore the significance of incorporating uncertainty into the evaluation of LLMs. Our implementation is available at https://github.com/smartyfh/LLM-Uncertainty-Bench.
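To make the idea of uncertainty quantification concrete, the sketch below computes predictive entropy over a model's answer-option probabilities for a multiple-choice question. This is a minimal illustration of one common uncertainty measure, not the paper's own method (which may differ; see the linked repository); the logit values are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw option logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(logits):
    """Shannon entropy (in bits) of the option distribution.

    Higher entropy means the model is less certain about its answer,
    even if its top-ranked option (and hence its accuracy) is the same.
    """
    probs = softmax(logits)
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical logits for a 4-option multiple-choice question (A-D).
confident = [8.0, 1.0, 0.5, 0.2]   # probability mass concentrated on A
uncertain = [2.0, 1.9, 2.1, 1.8]   # nearly uniform over the options

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Both sets of logits rank option A (or C) first, so an accuracy-only leaderboard would treat such predictions identically; the entropy separates the confident from the uncertain model, which is the gap the benchmark targets.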

Type: Proceedings paper
Title: Benchmarking LLMs via Uncertainty Quantification
Event: 38th Conference on Neural Information Processing Systems (NeurIPS 2024) Track on Datasets and Benchmarks.
Open access status: An open access version is available from UCL Discovery
DOI: 10.52202/079017-0491
Publisher version: https://doi.org/10.52202/079017-0491
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10217368
