eprintid: 10119535
rev_number: 14
eprint_status: archive
userid: 608
dir: disk0/10/11/95/35
datestamp: 2021-01-27 14:24:14
lastmod: 2021-12-20 00:53:53
status_changed: 2021-01-27 14:24:14
type: proceedings_section
metadata_visibility: show
creators_name: Alzantot, M
creators_name: Widdicombe, A
creators_name: Julier, S
creators_name: Srivastava, M
title: NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning
ispublished: pub
divisions: UCL
divisions: B04
divisions: C05
divisions: F48
keywords: Computational modeling, Predictive models, Training, Cost function, Neural networks, Computer science, Image recognition
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of providing explanations of why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes in model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result and encourages producing an interpretable mask. Experiments using state-of-the-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image which are most relevant to the DNN decision. By showing a visual quality comparison between NeuroMask explanations and those of other methods, we find NeuroMask to be both accurate and interpretable.
date: 2019-01-01
date_type: published
publisher: IEEE
official_url: https://doi.org/10.1109/SMARTCOMP.2019.00033
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1682597
doi: 10.1109/SMARTCOMP.2019.00033
lyricists_name: Julier, Simon
lyricists_id: SJULI23
actors_name: Julier, Simon
actors_id: SJULI23
actors_role: owner
full_text_status: public
publication: 2019 IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING (SMARTCOMP 2019)
place_of_pub: Washington, DC, USA
pagerange: 81-86
pages: 6
event_title: 5th IEEE International Conference on Smart Computing (SMARTCOMP)
event_location: Washington, DC
event_dates: 12 June 2019 - 14 June 2019
institution: 5th IEEE International Conference on Smart Computing (SMARTCOMP)
book_title: 2019 IEEE International Conference on Smart Computing (SMARTCOMP)
citation: Alzantot, M; Widdicombe, A; Julier, S; Srivastava, M; (2019) NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning. In: 2019 IEEE International Conference on Smart Computing (SMARTCOMP). (pp. 81-86). IEEE: Washington, DC, USA. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10119535/1/1908.04389v1.pdf