UCL Discovery

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study

Alqaraawi, A; Schuessler, M; Weiß, P; Costanza, E; Berthouze, N; (2020) Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study. In: IUI '20: Proceedings of the 25th International Conference on Intelligent User Interfaces. (pp. 275-285). ACM: Cagliari, Italy. Green open access

IUI2020_Saliency_maps_Camera_ready.pdf - Accepted Version (3MB)

Abstract

Convolutional neural networks (CNNs) offer strong machine learning performance across a range of applications, but their operation is hard to interpret, even for experts. Various explanation algorithms have been proposed to address this issue, yet little research effort has been devoted to evaluating them with users. In this paper, we report on an online between-group user study designed to evaluate the performance of “saliency maps”, a popular explanation technique for image classification applications of CNNs. Our results indicate that saliency maps produced by the LRP algorithm helped participants to learn about some specific image features the system is sensitive to. However, the maps seemed to provide very limited help when participants tried to anticipate the network’s output for new images. Drawing on our findings, we highlight implications for design and further research on explainable AI. In particular, we argue the HCI and AI communities should look beyond instance-level explanations.
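
The explanations evaluated in the study were generated with layer-wise relevance propagation (LRP). As a rough illustration of the general saliency-map idea only, the sketch below computes a simpler vanilla-gradient map with PyTorch; the model, preprocessing, and file name are illustrative assumptions, not the authors' setup.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained classifier standing in for the CNN under study (assumption).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("example.jpg").convert("RGB")  # placeholder image path
    x = preprocess(img).unsqueeze(0).requires_grad_(True)

    # Forward pass, then backpropagate the top class score to the input.
    scores = model(x)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # Per-pixel saliency: max absolute gradient over the colour channels.
    saliency = x.grad.abs().max(dim=1)[0].squeeze(0)  # shape (224, 224)

Unlike this gradient heuristic, LRP redistributes the network's output score backwards layer by layer under a conservation rule, but both produce a per-pixel relevance map of the kind shown to participants.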

Type: Proceedings paper
Title: Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Event: ACM IUI 2020
Location: Cagliari, Italy
Dates: 17 March 2020 - 20 March 2020
Open access status: An open access version is available from UCL Discovery
DOI: 10.1145/3377325.3377519
Publisher version: https://doi.org/10.1145/3377325.3377519
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: explainable AI; saliency maps; heatmap; human-AI interaction; user studies
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > UCL Interaction Centre
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10091974
Downloads since deposit: 399
