Colecchia, F; Ruffle, JK; Pombo, GC; Gray, R; Hyare, H; Nachev, P; (2020) Knowledge-driven deep neural network models for brain tumour segmentation. In: Journal of Physics: Conference Series, 1662, 012010. IOP Publishing.
Abstract
Image segmentation is a computer vision task that aims to establish a probabilistic mapping between individual pixels (2D) or voxels (3D) in an input image and a set of predefined semantic categories, with reference to domain-specific knowledge. When applied to medical images, e.g. Magnetic Resonance Imaging (MRI), it allows delineation of healthy from abnormal tissue. Despite challenges posed by the morphological heterogeneity of lesions, segmentation of brain tumours has the potential to streamline otherwise time-consuming manual annotation. Whereas brain tumour segmentation has continually advanced by incorporating innovative deep learning methods, the heuristics routinely employed by radiologists have often been neglected. The focus of nearly all tumour segmentation articles to date on 3D isotropic research-grade scans has also led to results of unknown generalisability to hospital-quality data. To address these gaps, this study combines modern deep learning methods and clinically driven priors into an optimised segmentation pipeline evaluated on clinical data at a large neurology and neurosurgery tertiary centre.
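As an illustration of the per-voxel probabilistic mapping the abstract describes, the sketch below builds a toy 3D convolutional segmenter in PyTorch that outputs a class-probability distribution at every voxel of an MRI volume. The network size, class count, and class names are assumptions made for illustration only; this is not the pipeline reported in the paper.

```python
# Minimal sketch (not the authors' pipeline): a toy 3D CNN mapping a
# single-channel MRI volume to per-voxel class probabilities, i.e. the
# probabilistic voxel-to-category mapping described in the abstract.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # assumed example classes: background, oedema, enhancing tumour, necrotic core


class ToyVoxelSegmenter(nn.Module):
    """Tiny 3D CNN producing a class-probability distribution at every voxel."""

    def __init__(self, in_channels: int = 1, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution yields per-voxel class logits
        self.classifier = nn.Conv3d(8, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(self.features(x))
        # Softmax over the class dimension gives, for each voxel, a
        # probability for every semantic category
        return torch.softmax(logits, dim=1)


if __name__ == "__main__":
    # Dummy anisotropic volume: batch=1, channel=1, 16 x 64 x 64 voxels
    volume = torch.randn(1, 1, 16, 64, 64)
    probs = ToyVoxelSegmenter()(volume)
    print(probs.shape)  # torch.Size([1, 4, 16, 64, 64])
    # Per-voxel probabilities sum to 1 across the class dimension
    print(probs.sum(dim=1).allclose(torch.ones(1, 16, 64, 64)))
```

The per-voxel softmax output is what distinguishes segmentation from whole-image classification: each voxel receives its own categorical distribution, from which a hard label map can be obtained by taking the argmax over the class dimension.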