Dramsch, Jesper Sören; Kuglitsch, Monique M; Fernández-Torres, Miguel-Ángel; Toreti, Andrea; Albayrak, Rustem Arif; Nava, Lorenzo; Ghaffarian, Saman; ... Hrast Essenfelder, Arthur (2025) Explainability can foster trust in artificial intelligence in geoscience. Nature Geoscience, 18 (2), pp. 112-114. DOI: 10.1038/s41561-025-01639-x.
Text: 44210_2_merged_1735900924.pdf - Accepted Version (1MB). Access restricted to UCL open access staff until 6 August 2025.
Abstract
Uptake of explainable artificial intelligence (XAI) methods in geoscience is currently limited. We argue that such methods, which reveal the decision processes of AI models, can foster trust in their results and facilitate the broader adoption of AI.
| Type: | Article |
| --- | --- |
| Title: | Explainability can foster trust in artificial intelligence in geoscience |
| DOI: | 10.1038/s41561-025-01639-x |
| Publisher version: | https://doi.org/10.1038/s41561-025-01639-x |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10205986 |