UCL Discovery

Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures

Pérez-García, F; Scott, C; Sparks, R; Diehl, B; Ourselin, S; (2021) Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. (pp. 334-344). Springer: Cham, Switzerland. Green open access

paper995.pdf - Accepted Version (1MB)
Abstract

Detailed analysis of seizure semiology, the symptoms and signs which occur during a seizure, is critical for management of epilepsy patients. Inter-rater reliability using qualitative visual analysis is often poor for semiological features. Therefore, automatic and quantitative analysis of video-recorded seizures is needed for objective assessment. We present GESTURES, a novel architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to learn deep representations of arbitrarily long videos of epileptic seizures. We use a spatiotemporal CNN (STCNN) pre-trained on large human action recognition (HAR) datasets to extract features from short snippets (approx. 0.5 s) sampled from seizure videos. We then train an RNN to learn seizure-level representations from the sequence of features. We curated a dataset of seizure videos from 68 patients and evaluated GESTURES on its ability to classify seizures into focal onset seizures (FOSs) (N = 106) vs. focal to bilateral tonic-clonic seizures (TCSs) (N = 77), obtaining an accuracy of 98.9% using bidirectional long short-term memory (BLSTM) units. We demonstrate that an STCNN trained on a HAR dataset can be used in combination with an RNN to accurately represent arbitrarily long videos of seizures. GESTURES can provide accurate seizure classification by modeling sequences of semiologies.
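The following is a minimal sketch, not the authors' code, of the two-stage design described in the abstract: a spatiotemporal CNN pre-trained on a human action recognition dataset extracts one feature vector per short video snippet, and a bidirectional LSTM aggregates the variable-length sequence of snippet features into a seizure-level prediction (FOS vs. TCS). The backbone (torchvision's r2plus1d_18), feature dimension, snippet shape, and hidden size are illustrative assumptions, not details from the paper.

# Minimal sketch of an STCNN + BLSTM pipeline for arbitrarily long seizure videos.
# Assumptions: r(2+1)D-18 pretrained on Kinetics as a stand-in HAR backbone,
# ~0.5 s snippets resized to 112x112, and a single seizure per forward pass.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

class SnippetSequenceClassifier(nn.Module):
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        backbone = r2plus1d_18(pretrained=True)      # HAR-pretrained spatiotemporal CNN
        self.feature_dim = backbone.fc.in_features   # 512 for r(2+1)D-18
        backbone.fc = nn.Identity()                  # keep snippet features, drop the HAR classifier
        self.stcnn = backbone
        self.rnn = nn.LSTM(self.feature_dim, hidden_size,
                           batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, snippets):
        # snippets: (num_snippets, 3, T, H, W), short clips sampled from one seizure video
        with torch.no_grad():                        # backbone kept frozen in this sketch
            feats = self.stcnn(snippets)             # (num_snippets, feature_dim)
        _, (h_n, _) = self.rnn(feats.unsqueeze(0))   # treat the snippet sequence as one batch element
        seizure_repr = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # final forward + backward hidden states
        return self.classifier(seizure_repr)         # logits for FOS vs. TCS

model = SnippetSequenceClassifier()
video_snippets = torch.randn(12, 3, 8, 112, 112)     # 12 dummy snippets of 8 frames each
logits = model(video_snippets)                        # shape: (1, 2)

Because the RNN consumes a sequence of fixed-size snippet features rather than raw frames, the same model handles seizure videos of any duration; only the number of snippets changes.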

Type: Proceedings paper
Title: Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures
Event: 24th International Conference on Medical Image Computing and Computer Assisted Intervention
Open access status: An open access version is available from UCL Discovery
DOI: 10.1007/978-3-030-87240-3_32
Publisher version: https://doi.org/10.1007/978-3-030-87240-3_32
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
Keywords: Epilepsy video-telemetry, Temporal segment networks, Transfer learning
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology > Clinical and Experimental Epilepsy
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Med Phys and Biomedical Eng
URI: https://discovery.ucl.ac.uk/id/eprint/10132500