UCL Discovery

Improving explainability of deep neural network-based electrocardiogram interpretation using variational auto-encoders

van de Leur, Rutger R; Bos, Max N; Taha, Karim; Sammani, Arjan; Yeung, Ming Wai; van Duijvenboden, Stefan; Lambiase, Pier D; ... van Es, René; et al. (2022) Improving explainability of deep neural network-based electrocardiogram interpretation using variational auto-encoders. European Heart Journal – Digital Health, 3(3), pp. 390-404. 10.1093/ehjdh/ztac038.


Abstract

AIMS: Deep neural networks (DNNs) perform excellently in interpreting electrocardiograms (ECGs), both for conventional ECG interpretation and for novel applications such as detection of reduced ejection fraction (EF). Despite these promising developments, implementation is hampered by the lack of trustworthy techniques to explain the algorithms to clinicians. In particular, currently employed heatmap-based methods have been shown to be inaccurate.

METHODS AND RESULTS: We present a novel pipeline consisting of a variational auto-encoder (VAE) that learns the underlying factors of variation of the median beat ECG morphology (the FactorECG), which are subsequently used in common and interpretable prediction models. Because the ECG factors can be made explainable by generating and visualizing ECGs at both the model and the individual level, the pipeline provides improved explainability over heatmap-based methods. By training on a database of 1.1 million ECGs, the VAE can compress the ECG into 21 generative ECG factors, most of which are associated with physiologically valid underlying processes. Performance of the explainable pipeline was similar to that of 'black box' DNNs in conventional ECG interpretation [area under the receiver operating curve (AUROC) 0.94 vs. 0.96], detection of reduced EF (AUROC 0.90 vs. 0.91), and prediction of 1-year mortality (AUROC 0.76 vs. 0.75). Contrary to the 'black box' DNNs, our pipeline provided explainability as to which morphological ECG changes were important for prediction. Results were confirmed in a population-based external validation dataset.

CONCLUSIONS: Future studies on DNNs for ECGs should employ explainable pipelines to facilitate clinical implementation by building confidence in artificial intelligence and making it possible to identify biased models.
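The two-stage structure described in the abstract — a VAE that compresses a median-beat ECG into 21 generative factors, followed by an interpretable prediction model on those factors — can be sketched as follows. This is a minimal illustrative sketch only, not the authors' implementation: the linear encoder, the dimensions, and the random weights are stand-ins (a real FactorECG-style pipeline uses a trained deep convolutional VAE), and only the reparameterization step and the factor-to-classifier hand-off mirror the described design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a median-beat ECG flattened to a vector,
# compressed to 21 generative factors as in the paper.
ECG_DIM, N_FACTORS = 600, 21

# Toy linear "encoder" mapping an ECG to the mean and log-variance of
# the latent factors (a real VAE would use a deep network; this is a sketch).
W_mu = rng.normal(0.0, 0.01, (N_FACTORS, ECG_DIM))
W_logvar = rng.normal(0.0, 0.01, (N_FACTORS, ECG_DIM))

def encode(ecg):
    """Return (mu, logvar) of the approximate posterior over factors."""
    return W_mu @ ecg, W_logvar @ ecg

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps — the standard VAE reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Stage 1: encode a (random stand-in) median-beat ECG into 21 factors.
ecg = rng.standard_normal(ECG_DIM)
mu, logvar = encode(ecg)
factors = reparameterize(mu, logvar)
print(factors.shape)  # (21,)

# Stage 2: the factors feed a simple, interpretable model — e.g. a
# logistic regression whose per-factor coefficients can be inspected
# and, via the decoder, visualized as morphological ECG changes.
coef = rng.normal(0.0, 0.1, N_FACTORS)
p_reduced_ef = 1.0 / (1.0 + np.exp(-(coef @ factors)))
print(0.0 < p_reduced_ef < 1.0)  # True
```

Because each prediction is a weighted sum of the 21 factors, and each factor can be decoded back into an ECG morphology, explainability is obtained at the model level (which factors carry weight) and the individual level (which morphological changes drove a given prediction).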

Type: Article
Title: Improving explainability of deep neural network-based electrocardiogram interpretation using variational auto-encoders
Location: England
Open access status: An open access version is available from UCL Discovery
DOI: 10.1093/ehjdh/ztac038
Publisher version: https://doi.org/10.1093/ehjdh/ztac038
Language: English
Additional information: © The Author(s) 2022. Published by Oxford University Press on behalf of the European Society of Cardiology. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
Keywords: Artificial intelligence, Deep learning, Deep neural network, Electrocardiogram, Explainable, Interpretable
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Population Health Sciences > Institute of Cardiovascular Science
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Population Health Sciences > Institute of Cardiovascular Science > Clinical Science
URI: https://discovery.ucl.ac.uk/id/eprint/10166263
