UCL Discovery

THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models

Liang, Mengfei; Arun, Archish; Wu, Zekun; Munoz, Cristian; Lutch, Jonathan; Kazim, Emre; Koshiyama, Adriano; (2025) THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models. In: NeurIPS Workshop on Socially Responsible Language Modelling Research 2024. NeurIPS. (Green open access)

Text: 2409.11353v3.pdf - Accepted Version
Download (760kB)

Abstract

Hallucination, the generation of factually incorrect content, is a growing challenge in Large Language Models (LLMs). Existing detection and mitigation methods are often isolated and insufficient for domain-specific needs, lacking a standardized pipeline. This paper introduces THaMES (Tool for Hallucination Mitigations and EvaluationS), an integrated framework and library addressing this gap. THaMES offers an end-to-end solution for evaluating and mitigating hallucinations in LLMs, featuring automated test set generation, multifaceted benchmarking, and adaptable mitigation strategies. It automates test set creation from any corpus, ensuring high data quality, diversity, and cost-efficiency through techniques like batch processing, weighted sampling, and counterfactual validation. THaMES assesses a model's ability to detect and reduce hallucinations across various tasks, including text generation and binary classification, applying optimal mitigation strategies like In-Context Learning (ICL), Retrieval Augmented Generation (RAG), and Parameter-Efficient Fine-tuning (PEFT). Evaluations of state-of-the-art LLMs using a knowledge base of academic papers, political news, and Wikipedia reveal that commercial models like GPT-4o benefit more from RAG than ICL, while open-weight models like Llama-3.1-8B-Instruct and Mistral-Nemo gain more from ICL. Additionally, PEFT significantly enhances the performance of Llama-3.1-8B-Instruct in both evaluation tasks.
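To make the test-set generation pipeline described above more concrete, the following is a minimal sketch of its three stages: weighted sampling over a corpus, batched QA-pair creation, and a counterfactual validation filter. Every function name and heuristic here (sample_chunks, generate_qa_batch, is_counterfactually_valid) is a hypothetical stand-in for illustration, not the published THaMES API.

import random
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str
    source_chunk: str

def sample_chunks(corpus: list[str], k: int) -> list[str]:
    # Weighted sampling: favour longer chunks, which tend to carry more
    # factual content worth testing (hypothetical heuristic).
    weights = [len(chunk) for chunk in corpus]
    return random.choices(corpus, weights=weights, k=k)

def generate_qa_batch(chunks: list[str]) -> list[QAPair]:
    # Stand-in for a batched LLM call that turns each chunk into a
    # question/answer pair; batching amortises per-request cost.
    return [
        QAPair(
            question=f"What does the source say about: {chunk[:40]}...?",
            answer=chunk[:80],
            source_chunk=chunk,
        )
        for chunk in chunks
    ]

def is_counterfactually_valid(pair: QAPair) -> bool:
    # Placeholder check: in a real pipeline a second model would verify
    # that the answer is entailed by the source chunk and that a
    # perturbed ("counterfactual") answer would be rejected.
    return pair.answer in pair.source_chunk

def build_test_set(corpus: list[str], k: int) -> list[QAPair]:
    candidates = generate_qa_batch(sample_chunks(corpus, k))
    return [p for p in candidates if is_counterfactually_valid(p)]

The weighted sampling and validation stages correspond to the abstract's claims about data quality and cost-efficiency: sampling concentrates generation budget on informative chunks, and the counterfactual filter discards QA pairs whose answers are not grounded in the source text.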

Type: Proceedings paper
Title: THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models
Event: Socially Responsible Language Modelling Research (SoLaR) 2024 - NeurIPS 2024 Workshop
Open access status: An open access version is available from UCL Discovery
Publisher version: https://openreview.net/forum?id=cKhhPYfHKN
Language: English
Additional information: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10209109
