eprintid: 10191204
rev_number: 12
eprint_status: archive
userid: 699
dir: disk0/10/19/12/04
datestamp: 2024-05-23 14:12:36
lastmod: 2024-05-23 14:12:36
status_changed: 2024-05-23 14:12:36
type: thesis
metadata_visibility: show
sword_depositor: 699
creators_name: Lawry Aguila, Ana
title: Unsupervised and multi-modal representation learning for studying heterogeneous neurological diseases
ispublished: unpub
divisions: UCL
divisions: B04
divisions: C05
divisions: F42
note: Copyright © The Author 2022. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
abstract: One of the challenges of studying common neurological disorders is disease heterogeneity, including differences in causes, neuroimaging characteristics, comorbidities, and genetic variation. Because of this, disease labels are often poorly defined, if available at all, making supervised learning methods unsuitable for studying heterogeneous diseases. Normative modelling is one approach to studying heterogeneous brain disorders: it quantifies how brain imaging-based measures of individuals deviate from a healthy population. The normative model provides a statistical description of the ‘normal’ range that can be used at the subject level to detect deviations, which relate to pathological effects. Traditional normative models use a mass-univariate approach, which is computationally costly and ignores the interactions and dependencies among multiple brain regions. This thesis introduces deep learning-based normative models, using autoencoders, along with accompanying deviation metrics in the latent and feature space.
For many neurological diseases, we expect to observe abnormalities across a range of neuroimaging and biological variables. However, existing normative models have predominantly focused on a single imaging modality. In this thesis, we develop a multi-modal normative modelling framework using multi-modal variational autoencoders. By aggregating abnormality across variables from multiple modalities, this framework detects deviations more effectively than uni-modal baselines. The multi-modal normative models developed in this work are then applied to complex, real-world datasets for the study of epilepsy and early-stage Alzheimer's disease. Multi-modal autoencoders are increasingly being applied to biomedical data. However, comparing different approaches remains a challenge, as existing implementations, where available, use different deep learning frameworks and programming styles. To address this issue, we develop a Python library of multi-modal autoencoder implementations, accompanied by educational materials. This library forms the foundation for all multi-modal normative modelling analyses conducted in this thesis.
date: 2024-04-28
date_type: published
full_text_type: other
thesis_class: doctoral_embargoed
thesis_award: Ph.D
language: eng
verified: verified_manual
elements_id: 2270136
lyricists_name: Lawry Aguila, Ana
lyricists_id: ALAWR72
actors_name: Lawry Aguila, Ana
actors_id: ALAWR72
actors_role: owner
full_text_status: restricted
pages: 203
institution: UCL (University College London)
department: Medical Physics and Biomedical Engineering
thesis_type: Doctoral
editors_name: Altmann, Andre
citation: Lawry Aguila, Ana; (2024) Unsupervised and multi-modal representation learning for studying heterogeneous neurological diseases. Doctoral thesis (Ph.D), UCL (University College London).
document_url: https://discovery.ucl.ac.uk/id/eprint/10191204/2/Thesis_corrections.pdf
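The abstract's core idea, scoring subject-level deviation against a healthy reference cohort, can be sketched in a few lines. This is a minimal illustration under my own assumptions (z-scoring each subject's per-region autoencoder reconstruction error against a healthy hold-out distribution, then aggregating across regions), not the thesis's exact deviation metrics; all function and variable names here are hypothetical.

```python
import numpy as np

def deviation_zscores(healthy_error, subject_error):
    """Feature-space deviation: z-score each subject's per-region
    reconstruction error against a healthy hold-out cohort."""
    mu = healthy_error.mean(axis=0)
    sigma = healthy_error.std(axis=0)
    return (subject_error - mu) / sigma

rng = np.random.default_rng(0)
# Toy reconstruction errors: 200 healthy hold-out subjects,
# 5 test subjects, 10 brain regions.
healthy_err = rng.normal(1.0, 0.2, size=(200, 10))
test_err = rng.normal(1.0, 0.2, size=(5, 10))
test_err[0] += 2.0  # one subject with abnormally high error in every region

z = deviation_zscores(healthy_err, test_err)
# Aggregate per-region deviations into one subject-level score.
subject_score = np.sqrt((z ** 2).mean(axis=1))
```

In a real analysis the reconstruction errors would come from an autoencoder trained only on healthy controls; the aggregation step is what a multi-modal extension would repeat across variables from several modalities.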