TY - JOUR
TI - Explainability can foster trust in artificial intelligence in geoscience
A1 - Dramsch, Jesper Sören
A1 - Kuglitsch, Monique M
A1 - Fernández-Torres, Miguel-Ángel
A1 - Toreti, Andrea
A1 - Albayrak, Rustem Arif
A1 - Nava, Lorenzo
A1 - Ghaffarian, Saman
A1 - Cheng, Ximeng
A1 - Ma, Jackie
A1 - Samek, Wojciech
A1 - Venguswamy, Rudy
A1 - Koul, Anirudh
A1 - Muthuregunathan, Raghavan
A1 - Hrast Essenfelder, Arthur
JF - Nature Geoscience
Y1 - 2025/02//
VL - 18
IS - 2
SP - 112
EP - 114
PB - Springer Science and Business Media LLC
UR - https://doi.org/10.1038/s41561-025-01639-x
ID - discovery10205986
AV - restricted
N1 - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
N2 - Uptake of explainable artificial intelligence (XAI) methods in geoscience is currently limited. We argue that such methods that reveal the decision processes of AI models can foster trust in their results and facilitate the broader adoption of AI.
ER -