Abdulaal, A; Jin, C; Montaña-Brown, N; Gema, AP; de Castro, DC; Alexander, DC; Teare, P; ...; Saseendran, A
(2025) Balancing Act: Diversity and Consistency in Large Language Model Ensembles. In: 13th International Conference on Learning Representations (ICLR 2025). (pp. 29287-29319). (In press).
Text: 8091_Balancing_Act_Diversity_a.pdf - Accepted Version. Access restricted to UCL open access staff until 25 January 2026. Download (624kB).
Abstract
Ensembling strategies for Large Language Models (LLMs) have demonstrated significant potential in improving performance across various tasks by combining the strengths of individual models. However, identifying the most effective ensembling method remains an open challenge, as neither maximizing output consistency through self-consistency decoding nor enhancing model diversity via frameworks like 'Mixture of Agents' has proven universally optimal. Motivated by this, we propose a unified framework to examine the trade-offs between task performance, model diversity, and output consistency in ensembles. More specifically, we introduce a consistency score that defines a gating mechanism for mixtures of agents and an algorithm for mixture refinement to investigate these trade-offs at the semantic and model levels, respectively. We incorporate our insights into a novel inference-time LLM ensembling strategy called the Dynamic Mixture of Agents (DMoA) and demonstrate that it achieves a new state-of-the-art result in the challenging Big Bench Hard mixed evaluations benchmark. Our analysis reveals that cross-validation bias can enhance performance, contingent on the expertise of the constituent models. We further demonstrate that distinct reasoning tasks, such as arithmetic reasoning, commonsense reasoning, and instruction following, require different model capabilities, leading to inherent task-dependent trade-offs that DMoA can balance effectively.
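The consistency-gated ensembling idea described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the consistency score is approximated by pairwise exact-match agreement over normalized answers (the paper computes consistency at the semantic level), and the `threshold` and `aggregate` callable stand in for DMoA's actual gating and synthesis steps, which are not specified here.

```python
# Hypothetical sketch of consistency-gated ensembling, loosely inspired by
# the abstract. Not the authors' implementation: agreement is measured by
# normalized exact match rather than a semantic consistency score.

from collections import Counter
from itertools import combinations
from typing import Callable, List


def consistency_score(answers: List[str]) -> float:
    """Fraction of answer pairs that agree after simple normalization."""
    norm = [a.strip().lower() for a in answers]
    pairs = list(combinations(norm, 2))
    if not pairs:
        return 1.0
    return sum(1 for a, b in pairs if a == b) / len(pairs)


def gated_ensemble(
    answers: List[str],
    aggregate: Callable[[List[str]], str],
    threshold: float = 0.6,  # assumed cut-off, chosen for illustration
) -> str:
    """If candidate answers are already consistent, fall back to majority
    voting (self-consistency); otherwise pass the diverse set to an
    aggregator (mixture-of-agents style synthesis)."""
    if consistency_score(answers) >= threshold:
        return Counter(a.strip().lower() for a in answers).most_common(1)[0][0]
    return aggregate(answers)


if __name__ == "__main__":
    # Stand-in for an aggregator LLM call: here it just merges the candidates.
    demo_aggregator = lambda xs: " / ".join(sorted(set(xs)))
    print(gated_ensemble(["42", "42", "forty-two"], demo_aggregator))
```

In this toy version, high agreement triggers cheap majority voting while disagreement routes the candidates to an aggregator, mirroring the trade-off between consistency and diversity that the paper studies.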
| Type: | Proceedings paper | 
|---|---|
| Title: | Balancing Act: Diversity and Consistency in Large Language Model Ensembles | 
| Event: | ICLR 2025 | 
| Publisher version: | https://iclr.cc/FAQ/Proceedings | 
| Language: | English | 
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions. | 
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science | 
| URI: | https://discovery.ucl.ac.uk/id/eprint/10211652 | 