Lawry Aguila, Ana; (2024) Unsupervised and multi-modal representation learning for studying heterogeneous neurological diseases. Doctoral thesis (Ph.D), UCL (University College London).
Text: Thesis_corrections.pdf (Accepted Version, 67MB). Access restricted to UCL open access staff until 1 May 2025.
Abstract
One of the challenges of studying common neurological disorders is disease heterogeneity, including differences in causes, neuroimaging characteristics, comorbidities, or genetic variation. Because of this, disease labels are often poorly defined, if available at all, making supervised learning methods unsuitable for studying heterogeneous diseases. Normative modelling is one approach to studying heterogeneous brain disorders: it quantifies how brain imaging-based measures of individuals deviate from a healthy population. The normative model provides a statistical description of the ‘normal’ range that can be used at the subject level to detect deviations, which relate to pathological effects. Traditional normative models use a mass-univariate approach, which is computationally costly and ignores the interactions and dependencies among multiple brain regions. This thesis introduces deep learning-based normative models, using autoencoders, and accompanying deviation metrics in the latent and feature space.

For many neurological diseases, we expect to observe abnormalities across a range of neuroimaging and biological variables. However, existing normative models have predominantly focused on a single imaging modality. In this thesis, we develop a multi-modal normative modelling framework using multi-modal Variational Autoencoders. By aggregating abnormality across variables from multiple modalities, this framework proves more effective at detecting deviations than uni-modal baselines. The multi-modal normative models developed in this work are then applied to complex, real-world datasets for the study of epilepsy and early-stage Alzheimer's disease.

Multi-modal autoencoders are increasingly being applied to biomedical data. However, comparing different approaches remains a challenge because existing implementations, where available, use different deep learning frameworks and programming styles. To address this issue, we develop a Python library of multi-modal autoencoder implementations, accompanied by educational materials. This library forms the foundation for all multi-modal normative modelling analyses conducted in this thesis.
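As an illustrative sketch only (not the thesis implementation, which is not reproduced in this record), the snippet below shows the general idea of autoencoder-based normative modelling described in the abstract: an autoencoder is fitted to healthy-control data, and new subjects are then scored with a latent-space deviation metric (here assumed to be a Mahalanobis distance to the healthy latent distribution) and a feature-space deviation metric (here assumed to be reconstruction error). The architecture, metric definitions, and all names are assumptions made for illustration.

```python
# Illustrative sketch (assumed, not the thesis code): fit an autoencoder on
# healthy controls, then score subjects with latent- and feature-space deviations.
import numpy as np
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, n_features, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def fit_normative(model, healthy, epochs=200, lr=1e-3):
    """Fit the autoencoder on healthy-control data only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(healthy)
        loss_fn(recon, healthy).backward()
        opt.step()
    return model


@torch.no_grad()
def deviation_scores(model, healthy, subjects):
    """Latent-space (Mahalanobis) and feature-space (reconstruction) deviations."""
    _, z_healthy = model(healthy)
    recon, z = model(subjects)
    mu = z_healthy.mean(0)
    cov = torch.from_numpy(np.cov(z_healthy.numpy(), rowvar=False)).float()
    diff = z - mu
    latent_dev = torch.sqrt((diff @ torch.linalg.pinv(cov) * diff).sum(1))
    feature_dev = ((subjects - recon) ** 2).mean(1)
    return latent_dev, feature_dev


# Usage with synthetic data: patients are shifted away from the healthy
# distribution, so both deviation scores should be larger on average.
torch.manual_seed(0)
healthy = torch.randn(200, 20)
patients = torch.randn(20, 20) + 1.0
model = fit_normative(Autoencoder(n_features=20), healthy)
latent_dev, feature_dev = deviation_scores(model, healthy, patients)
print(latent_dev.mean().item(), feature_dev.mean().item())
```

In the multi-modal setting described in the abstract, the same idea would be extended by replacing the single autoencoder with a multi-modal Variational Autoencoder and aggregating per-modality deviations into a joint abnormality score; that aggregation step is not shown here.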
| Field | Value |
| --- | --- |
| Type | Thesis (Doctoral) |
| Qualification | Ph.D |
| Title | Unsupervised and multi-modal representation learning for studying heterogeneous neurological diseases |
| Language | English |
| Additional information | Copyright © The Author 2022. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request. |
| UCL classification | UCL; UCL > Provost and Vice Provost Offices > UCL BEAMS; UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science; UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Med Phys and Biomedical Eng |
| URI | https://discovery.ucl.ac.uk/id/eprint/10191204 |


