Sudre, CH; Anson, BG; Ingala, S; Lane, CD; Jimenez, D; Haider, L; Varsavsky, T; ...; Cardoso, MJ (2019) Let's Agree to Disagree: Learning Highly Debatable Multirater Labelling. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, vol. 11767, pp. 665-673. DOI: 10.1007/978-3-030-32251-9_73.
Text: 1909.01891v1.pdf - Accepted Version
Abstract
Classification and differentiation of small pathological objects may vary greatly among human raters due to differences in training, expertise and consistency over time. In a radiological setting, objects commonly have high within-class appearance variability whilst sharing certain characteristics across different classes, making their distinction even more difficult. For example, markers of cerebral small vessel disease, such as enlarged perivascular spaces (EPVS) and lacunes, can be highly varied in appearance while exhibiting strong inter-class similarity, making this task very challenging for human raters. In this work, we investigate joint models of individual rater behaviour and multi-rater consensus in a deep learning setting, and apply them to a brain lesion object-detection task. Results show that jointly modelling both individual and consensus estimates leads to significant improvements in performance compared to directly predicting consensus labels, while also allowing the characterisation of human-rater consistency.
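The abstract does not detail the network architecture, but the core idea of jointly modelling individual raters and consensus can be sketched as a shared representation feeding one output head per rater plus a consensus head, trained with a joint loss. The following is an illustrative numpy toy, not the paper's implementation; all shapes, names and the random "labels" are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): 4 raters each label
# 8 objects with one of 3 classes (e.g. EPVS, lacune, neither), and a
# separate consensus label exists per object.
n_objects, n_raters, n_classes, n_features = 8, 4, 3, 5

features = rng.normal(size=(n_objects, n_features))            # shared representation
W_raters = rng.normal(size=(n_raters, n_features, n_classes))  # one linear head per rater
W_consensus = rng.normal(size=(n_features, n_classes))         # consensus head

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Per-rater predictions: shape (n_raters, n_objects, n_classes)
rater_probs = softmax(np.einsum("of,rfc->roc", features, W_raters))
# Consensus prediction: shape (n_objects, n_classes)
consensus_probs = softmax(features @ W_consensus)

# Simulated targets stand in for real annotations.
rater_labels = rng.integers(0, n_classes, size=(n_raters, n_objects))
consensus_labels = rng.integers(0, n_classes, size=n_objects)

# Joint objective: cross-entropy on each rater's labels plus
# cross-entropy on the consensus labels.
rater_loss = -np.mean(np.log(
    rater_probs[np.arange(n_raters)[:, None], np.arange(n_objects), rater_labels]))
consensus_loss = -np.mean(np.log(
    consensus_probs[np.arange(n_objects), consensus_labels]))
joint_loss = rater_loss + consensus_loss
print(f"joint loss: {joint_loss:.4f}")
```

Minimising such a joint loss lets the shared representation absorb rater-specific biases through the per-rater heads while the consensus head learns the agreed label, which is the mechanism the abstract credits for the performance gain and for characterising rater consistency.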