Martin, Sophie; (2025) Explainable Artificial Intelligence for Dementia: Applications and Validation for Clinical Translation. Doctoral thesis (Ph.D), UCL (University College London).
Text: Martin_10211446_Thesis.pdf (62MB). Access restricted to UCL open access staff until 1 February 2026.
Abstract
The increasing prevalence of dementia worldwide has highlighted the urgent need for early diagnosis and timely intervention. Artificial intelligence (AI) has emerged as a promising tool for improving diagnostic efficiency and furthering our understanding of neurodegenerative diseases. However, ongoing challenges such as a lack of transparency and poor generalisability have limited the use of such models in routine care. This thesis explores the application of explainable artificial intelligence (XAI) to black-box dementia prediction models. A review of the current literature identified a critical gap in the evaluation of explanation techniques, motivating the experimental work. Neuroimaging data from multi-centre research studies were combined with well-established machine learning and deep learning approaches to address two tasks: distinguishing individuals with Alzheimer's dementia from cognitively normal participants, and predicting dementia onset within three years among individuals with cognitive impairment. This enabled a comprehensive evaluation of ten popular explanation techniques, revealing a focus on brain regions associated with Alzheimer's disease pathology, such as the hippocampus. To support individual-level validation, amyloid positron emission tomography (PET) was used to quantify the correspondence between salient features and regional amyloid burden. While two techniques demonstrated strong alignment with PET measures, this analysis also highlighted the variability and limitations of different methods. Data from a memory clinic cohort were then used to explore the generalisability of these findings in a real-world context. Models were evaluated on their ability to predict a future dementia diagnosis up to seven years in advance, within both classification and survival analysis frameworks. Fairness metrics and model explanations were generated to probe model robustness and to explore how XAI can support the interpretation of predictions. This thesis contributes to the growing field of explainable artificial intelligence in dementia research, offering insights into its potential to enhance diagnostic pathways and facilitate the integration of AI into clinical practice.
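The simplest family of explanation techniques evaluated in work like this is gradient-based saliency. Below is a minimal sketch, assuming a trained PyTorch classifier over 3D volumes; the function name, tensor shapes, and preprocessing are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of gradient-based saliency, one of the simpler explanation
# techniques. The model, input shape, and preprocessing are assumptions.
import torch

def gradient_saliency(model: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """Voxel-wise absolute input gradient of the top class score for one scan.

    volume: a (1, 1, D, H, W) tensor, e.g. a preprocessed T1-weighted MRI.
    """
    model.eval()
    volume = volume.clone().requires_grad_(True)
    logits = model(volume)                     # (1, num_classes)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()            # d(top score)/d(input voxels)
    return volume.grad.abs().squeeze()         # (D, H, W) saliency map
```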
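The correspondence between salient features and regional amyloid burden could be quantified in several ways; one plausible reading is a rank correlation of region-mean saliency against region-mean amyloid SUVR over an anatomical atlas. The sketch below makes that assumption explicit; the thesis's exact metric is not stated on this page.

```python
# Illustrative sketch: rank correlation between region-mean saliency and
# region-mean amyloid PET SUVR over an atlas. The metric choice is an
# assumption, not the thesis's confirmed method.
import numpy as np
from scipy.stats import spearmanr

def regional_correspondence(saliency: np.ndarray,
                            amyloid_suvr: np.ndarray,
                            atlas: np.ndarray,
                            region_ids: list[int]):
    """Spearman rho and p-value across atlas regions.

    All three volumes are assumed co-registered and of identical shape;
    atlas holds integer region labels, region_ids lists labels to include.
    """
    mean_sal = [saliency[atlas == rid].mean() for rid in region_ids]
    mean_suvr = [amyloid_suvr[atlas == rid].mean() for rid in region_ids]
    return spearmanr(mean_sal, mean_suvr)
```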
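The survival analysis framing can likewise be sketched as a Cox proportional hazards model fitted to right-censored follow-up data, with diagnosis times censored at seven years. All data, covariates, and effect sizes below are synthetic placeholders, not results from the memory clinic cohort.

```python
# Hedged sketch of the survival-analysis framing: a Cox proportional hazards
# model for time to dementia diagnosis, right-censored at seven years.
# Covariates and effect sizes are synthetic, chosen only for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
age = rng.normal(73, 6, n)                     # years
hippo = rng.normal(3.1, 0.3, n)                # normalised hippocampal volume

# Synthetic hazard: older age and smaller hippocampus shorten time to diagnosis.
scale = 5.0 * np.exp(-0.08 * (age - 73) + 2.0 * (hippo - 3.1))
time_to_dementia = rng.exponential(scale)

observed = np.minimum(time_to_dementia, 7.0)   # administrative censoring at 7y
event = (time_to_dementia <= 7.0).astype(int)  # 1 = diagnosis observed

df = pd.DataFrame({"duration": observed, "dementia": event,
                   "age": age, "hippocampal_volume": hippo})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="dementia")
cph.print_summary()                            # hazard ratios per covariate
```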
| Type: | Thesis (Doctoral) |
|---|---|
| Qualification: | Ph.D |
| Title: | Explainable Artificial Intelligence for Dementia: Applications and Validation for Clinical Translation |
| Language: | English |
| Additional information: | Copyright © The Author 2025. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10211446 |