UCL Discovery

Weakly-supervised convolutional neural networks for multimodal image registration

Hu, Y; Modat, M; Gibson, E; Li, W; Ghavami, N; Bonmati, E; Wang, G; ... Vercauteren, T (2018) Weakly-supervised convolutional neural networks for multimodal image registration. Medical Image Analysis, 49, pp. 1-13. 10.1016/j.media.2018.07.002. Green open access

Text: 1-s2.0-S1361841518301051-main.pdf - Published Version (3MB)

Abstract

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformations from the higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach predicts displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed training strategy, which utilises diverse types of anatomical labels that need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, requiring neither anatomical labels nor initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
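The core idea in the abstract, that a registration network can be trained on label overlap rather than voxel-level correspondence, can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the paper uses a differentiable image resampler and a multiscale Dice loss inside a CNN, whereas here a predicted dense displacement field warps a binary label by nearest-neighbour lookup, and the Dice overlap with the fixed-image label is the quantity the training loss would maximise. All function names are assumptions.

```python
import numpy as np

def warp_label(label, ddf):
    # Backward warp: the value at voxel x is sampled from x + ddf(x).
    # label: (D, H, W) binary array; ddf: (3, D, H, W) displacements in voxels.
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in label.shape], indexing="ij"))
    src = np.rint(grid + ddf).astype(int)
    for axis, size in enumerate(label.shape):
        src[axis] = np.clip(src[axis], 0, size - 1)  # clamp to the volume
    return label[src[0], src[1], src[2]]

def dice(a, b, eps=1e-6):
    # Soft Dice overlap; the weakly-supervised loss would be 1 - dice(...).
    return (2.0 * np.sum(a * b) + eps) / (np.sum(a) + np.sum(b) + eps)

# Toy example: a cubic "organ" label shifted by one voxel between images.
fixed = np.zeros((8, 8, 8))
fixed[2:6, 2:6, 2:6] = 1.0
moving = np.roll(fixed, 1, axis=0)

# A (here hand-set) displacement field of +1 voxel along the first axis
# undoes the shift; during training a CNN would predict this field.
ddf = np.zeros((3, 8, 8, 8))
ddf[0] = 1.0
warped = warp_label(moving, ddf)
print(dice(warped, fixed))  # close to 1.0: labels realigned
```

The point of the construction is that gradients flow only from label overlap, so at inference the trained network needs just the unlabelled image pair, matching the abstract's claim of label-free, initialisation-free registration.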

Type: Article
Title: Weakly-supervised convolutional neural networks for multimodal image registration
Location: Netherlands
Open access status: An open access version is available from UCL Discovery
DOI: 10.1016/j.media.2018.07.002
Publisher version: https://doi.org/10.1016/j.media.2018.07.002
Language: English
Additional information: Copyright © 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Keywords: Convolutional neural network, Image-guided intervention, Medical image registration, Prostate cancer, Weakly-supervised learning
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Medical Sciences > Div of Medicine
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Medical Sciences > Div of Surgery and Interventional Sci
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Medical Sciences > Div of Surgery and Interventional Sci > Department of Targeted Intervention
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Med Phys and Biomedical Eng
URI: https://discovery.ucl.ac.uk/id/eprint/10052331
