UCL Discovery

The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images

Mitry, D; Zutis, K; Dhillon, B; Peto, T; Hayat, S; Khaw, KT; Morgan, JE; ... UK Biobank Eye and Vision Consortium; (2016) The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images. Translational Vision Science & Technology, 5(5), Article 6. 10.1167/tvst.5.5.6. Green open access

Full text: The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images.pdf - Published Version (627kB)

Abstract

PURPOSE: Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool compared with expert classification.
METHODS: We used 100 retinal fundus photographs with predetermined disease criteria selected by two experts from a large cohort study. The Amazon Mechanical Turk Web platform was used to drive traffic to our site so that anonymous workers could perform a classification and annotation task on the fundus photographs in our dataset after a short training exercise. Three groups were assessed: masters only, nonmasters only, and nonmasters with compulsory training. We calculated the sensitivity, specificity, and area under the curve (AUC) of receiver operating characteristic (ROC) plots for all classifications compared to expert grading, and used the Dice coefficient and consensus threshold to assess annotation accuracy.
RESULTS: In total, we received 5389 annotations for 84 images (excluding 16 training images) in 2 weeks. A specificity and sensitivity of 71% (95% confidence interval [CI], 69%-74%) and 87% (95% CI, 86%-88%) were achieved for all classifications. The AUC in this study for all classifications combined was 0.93 (95% CI, 0.91-0.96). For image annotation, a maximal Dice coefficient (∼0.6) was achieved with a consensus threshold of 0.25.
CONCLUSIONS: This study supports the hypothesis that annotation of abnormalities in retinal images by ophthalmologically naive individuals is comparable to expert annotation. The highest AUC and agreement with expert annotation were achieved in the nonmasters with compulsory training group.
TRANSLATIONAL RELEVANCE: Crowdsourcing of retinal image analysis may be comparable to expert grading and has the potential to deliver timely, accurate, and cost-effective image analysis.
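
The abstract reports annotation accuracy as a Dice coefficient computed between expert annotations and a crowd consensus formed at a given consensus threshold. The paper itself does not supply code; the sketch below is purely illustrative, assuming binary pixel masks from workers and an expert, and using hypothetical function names and synthetic data rather than the study's actual pipeline.

import numpy as np

def consensus_mask(worker_masks, threshold):
    # worker_masks: (n_workers, H, W) binary arrays; a pixel enters the
    # consensus if the fraction of workers marking it is >= threshold.
    vote_fraction = np.mean(worker_masks, axis=0)
    return (vote_fraction >= threshold).astype(np.uint8)

def dice_coefficient(mask_a, mask_b):
    # Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Hypothetical data: one expert mask and 20 noisy worker masks.
rng = np.random.default_rng(0)
expert = (rng.random((64, 64)) > 0.8).astype(np.uint8)
workers = np.stack([
    np.clip(expert + (rng.random((64, 64)) > 0.95), 0, 1)
    for _ in range(20)
]).astype(np.uint8)

# Sweep consensus thresholds, as the study does, to find the value
# (reported as 0.25) that maximizes agreement with the expert.
for t in (0.1, 0.25, 0.5, 0.75):
    d = dice_coefficient(consensus_mask(workers, t), expert)
    print(f"consensus threshold {t:.2f}: Dice = {d:.2f}")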

Type: Article
Title: The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images
Location: United States
Open access status: An open access version is available from UCL Discovery
DOI: 10.1167/tvst.5.5.6
Publisher version: http://dx.doi.org/10.1167/tvst.5.5.6
Language: English
Additional information: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0).
Keywords: crowdsourcing, image analysis, retina
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Institute of Ophthalmology
URI: https://discovery.ucl.ac.uk/id/eprint/1524249
