Shapey, J; Kujawa, A; Dorent, R; Wang, G; Dimitriadis, A; Grishchuk, D; Paddick, I; ... Vercauteren, T; (2021) Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm. Scientific Data, 8, Article 286. 10.1038/s41597-021-01064-w.
Abstract
Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network achieving excellent results equivalent to those achieved by an independent human annotator. Here, we provide the first publicly-available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected on 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. Data includes all segmentations and contours used in treatment planning and details of the administered dose. Implementation of our automated segmentation algorithm uses MONAI, a freely-available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
Type: | Article |
---|---|
Title: | Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm |
Location: | England |
Open access status: | An open access version is available from UCL Discovery |
DOI: | 10.1038/s41597-021-01064-w |
Publisher version: | https://doi.org/10.1038/s41597-021-01064-w |
Language: | English |
Additional information: | © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article. |
UCL classification: | UCL; UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences; UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences; UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > The Ear Institute; UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology; UCL > Provost and Vice Provost Offices > UCL BEAMS; UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science; UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Med Phys and Biomedical Eng |
URI: | https://discovery.ucl.ac.uk/id/eprint/10137674 |