Yu, Hengjie; Wang, Yizhi; Cheng, Tao; Yan, Yan; Dawson, Kenneth A.; Li, Sam F. Y.; Zheng, Yefeng (2025) Empowering scientific discovery with explainable small domain-specific and large language models. Artificial Intelligence Review, 58(12), Article 371. 10.1007/s10462-025-11365-w.
Abstract
As artificial intelligence (AI) increasingly integrates into scientific research, explainability has become a cornerstone for ensuring reliability and innovation in discovery processes. This review offers a forward-looking integration of explainable AI (XAI)-based research paradigms, encompassing small domain-specific models, large language models (LLMs), and agent-based large-small model collaboration. For domain-specific models, we introduce a knowledge-oriented taxonomy categorizing methods into knowledge-agnostic, knowledge-based, knowledge-infused, and knowledge-verified approaches, emphasizing the balance between domain knowledge and innovative insights. For LLMs, we examine three strategies for integrating domain knowledge—prompt engineering, retrieval-augmented generation, and supervised fine-tuning—along with advances in explainability, including local, global, and conversation-based explanations. We also envision future agent-based model collaborations within automated laboratories, stressing the need for context-aware explanations tailored to research goals. Additionally, we discuss the unique characteristics and limitations of both explainable small domain-specific models and LLMs in the realm of scientific discovery. Finally, we highlight methodological challenges, potential pitfalls, and the necessity of rigorous validation to ensure XAI’s transformative role in accelerating scientific discovery and reshaping research paradigms.
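The abstract names prompt engineering, retrieval-augmented generation (RAG), and supervised fine-tuning as routes for integrating domain knowledge into LLMs. As a rough illustration of the RAG route only, the sketch below (a toy in-memory corpus, a naive keyword-overlap retriever standing in for vector search, and hypothetical prompt wording, none of it drawn from the paper) shows how retrieved domain passages can be assembled into a knowledge-grounded prompt before any model call.

```python
# Minimal, hypothetical RAG sketch: retrieve relevant domain passages, then prepend
# them to the question as context. Corpus, scoring, and prompt wording are
# illustrative assumptions, not the paper's implementation; the LLM call is omitted.

DOMAIN_CORPUS = [
    "Nanoparticle surface charge strongly influences protein corona formation.",
    "SHAP values attribute a model prediction to individual input features.",
    "Supervised fine-tuning adapts a pretrained LLM with labelled domain examples.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (a stand-in for a
    vector-similarity retriever) and return the top-k."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a knowledge-grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question, DOMAIN_CORPUS))
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be passed to an LLM; printing it is enough here.
    print(build_prompt("How does surface charge affect the protein corona of nanoparticles?"))
```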
| Field | Value |
|---|---|
| Type: | Article |
| Title: | Empowering scientific discovery with explainable small domain-specific and large language models |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1007/s10462-025-11365-w |
| Publisher version: | https://doi.org/10.1007/s10462-025-11365-w |
| Language: | English |
| Additional information: | Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. |
| Keywords: | Science & Technology, Technology, Computer Science, Artificial Intelligence, Computer Science, AI for science, Explainable AI, Scientific discovery, Domain knowledge, Research paradigm, ARTIFICIAL-INTELLIGENCE, AI, DECISIONS, MEDICINE |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Civil, Environ and Geomatic Eng |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10218539 |