UCL Discovery

Explainable artificial intelligence in disaster risk management: Achievements and prospective futures

Ghaffarian, Saman; Taghikhah, Firouzeh Rosa; Maier, Holger R; (2023) Explainable artificial intelligence in disaster risk management: Achievements and prospective futures. International Journal of Disaster Risk Reduction, 98, Article 104123. https://doi.org/10.1016/j.ijdrr.2023.104123. Green open access.

PDF (Published Version): 1-s2.0-S2212420923006039-main.pdf | Download (3MB)

Abstract

Disasters can have devastating impacts on communities and economies, underscoring the urgent need for effective strategic disaster risk management (DRM). Although Artificial Intelligence (AI) holds the potential to enhance DRM through improved decision-making processes, its inherent complexity and "black box" nature have led to a growing demand for Explainable AI (XAI) techniques. These techniques facilitate the interpretation and understanding of decisions made by AI models, promoting transparency and trust. However, the current state of XAI applications in DRM, their achievements, and the challenges they face remain underexplored. In this systematic literature review, we delve into the burgeoning domain of XAI-DRM, extracting 195 publications from the Scopus and ISI Web of Knowledge databases, and selecting 68 for detailed analysis based on predefined exclusion criteria. Our study addresses pertinent research questions, identifies various hazard and disaster types, risk components, and AI and XAI methods, uncovers the inherent challenges and limitations of these approaches, and provides synthesized insights to enhance their explainability and effectiveness in disaster decision-making. Notably, we observed a significant increase in the use of XAI techniques for DRM in 2022 and 2023, emphasizing the growing need for transparency and interpretability. Through a rigorous methodology, we offer key research directions that can serve as a guide for future studies. Our recommendations highlight the importance of multi-hazard risk analysis, the integration of XAI in early warning systems and digital twins, and the incorporation of causal inference methods to enhance DRM strategy planning and effectiveness. This study serves as a beacon for researchers and practitioners alike, illuminating the intricate interplay between XAI and DRM, and revealing the profound potential of AI solutions in revolutionizing disaster risk management.
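The abstract notes that XAI techniques make the decisions of AI models interpretable. As a minimal illustration of the idea (not taken from the paper), the sketch below computes per-feature attributions for a linear model, the simplest case where additive explanations such as SHAP values are exact. The flood-risk features, weights, and baseline are hypothetical, chosen only to show the mechanics.

```python
def linear_attributions(weights, x, baseline):
    """Contribution of each feature to the prediction, relative to a baseline.

    For a linear model f(x) = sum(w_i * x_i), the exact additive (SHAP)
    attribution of feature i is w_i * (x_i - baseline_i); the attributions
    sum to f(x) - f(baseline).
    """
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

# Hypothetical flood-risk score from three features:
# rainfall (mm), elevation (m), soil saturation (fraction).
weights = [0.8, -0.5, 0.3]      # assumed learned coefficients
x = [120.0, 10.0, 0.9]          # observed input to explain
baseline = [50.0, 30.0, 0.4]    # "background" (e.g. average) input

contribs = linear_attributions(weights, x, baseline)
prediction_shift = sum(contribs)  # how far f(x) moved from f(baseline)
```

For non-linear models (gradient boosting, deep networks), XAI methods such as SHAP or LIME approximate this same kind of additive decomposition, which is what makes a "black box" risk score auditable by a decision-maker.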

Type: Article
Title: Explainable artificial intelligence in disaster risk management: Achievements and prospective futures
Open access status: An open access version is available from UCL Discovery
DOI: 10.1016/j.ijdrr.2023.104123
Publisher version: https://doi.org/10.1016/j.ijdrr.2023.104123
Language: English
Additional information: © 2023 The Authors. Published by Elsevier Ltd. under a Creative Commons license (http://creativecommons.org/licenses/by/4.0/).
Keywords: Resilience building, Interpretable artificial intelligence, Transparency, Hazard and disaster type, Data-driven decision making
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Maths and Physical Sciences > Inst for Risk and Disaster Reduction
URI: https://discovery.ucl.ac.uk/id/eprint/10182264
Downloads since deposit: 140
