Li, Yazhe (2025). Advancing Representation Learning: Learning and Evaluation of Representations in Deep Neural Networks. Doctoral thesis (Ph.D), UCL (University College London).
Thesis_Final.pdf - Accepted Version (12MB)
Abstract
Data representation, through meaningful feature learning, is crucial for building robust machine learning models. Early efforts in representation learning focused on unsupervised dimensionality reduction. More recently, Deep Neural Networks (DNNs) have emerged as a powerful tool for feature learning. Despite being trained on classification tasks, DNNs develop increasingly abstract representations at deeper layers. However, these features often lack robustness and are vulnerable to domain shifts. DNNs can also be trained without labels, yielding network weights that serve either as frozen feature extractors or as initialization for further fine-tuning. Current unsupervised representation learning methods fall primarily into three categories: contrastive/self-distillation, masked prediction, and generation, all of which find widespread application in large-scale multimodal pre-training frameworks.

The thesis is structured into two parts, each addressing a fundamental aspect of representation learning:

1. Learning Visual Representations with DNNs. We explore the landscape of learning good representations in both unsupervised and supervised learning paradigms. In the unsupervised realm, we present two novel methods: SSL-HSIC (self-supervised learning with the Hilbert-Schmidt independence criterion), an approach that utilizes a kernel-based loss function to achieve an effect similar to contrastive learning; and DARL (denoising autoregressive representation learning), a generative method that directly targets pixel-level reconstruction. For supervised learning, we investigate causal representation learning and propose CIRCE (conditional independence regression covariance), a method that enforces a conditional independence constraint to enhance the robustness of representations under domain shifts.

2. Evaluation of Representations. Framing representation evaluation as a model selection problem, we leverage the Minimum Description Length (MDL) principle and develop an evaluation metric that effectively assesses the quality of learned representations.

This thesis contributes to the advancement of representation learning by proposing novel methods for both unsupervised and supervised settings, while simultaneously addressing the crucial challenge of representation evaluation through principled and effective approaches.
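For illustration only (the full SSL-HSIC objective in the thesis is more involved), the Hilbert-Schmidt independence criterion mentioned in the abstract can be estimated from two batches of features with the standard biased empirical estimator, trace(KHLH)/(n-1)^2, where K and L are kernel matrices and H is the centering matrix. A minimal sketch, assuming RBF kernels with a fixed bandwidth:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """RBF (Gaussian) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic_biased(X, Y, sigma=1.0):
    """Biased empirical HSIC estimate between paired samples X and Y.

    Returns a non-negative scalar; larger values indicate stronger
    statistical dependence between the two sets of features.
    """
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Treating HSIC as a dependence measure between two augmented views of the same images is the intuition behind using it as a contrastive-style learning signal.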
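As a hedged sketch of the MDL-as-model-selection idea (not the thesis's actual metric), prequential coding measures how many bits a simple probe needs to encode the labels given the representations: labels are coded block by block, with the probe retrained on all previously seen data before each block. The probe below is a hypothetical minimal logistic-regression probe for binary labels:

```python
import numpy as np

def train_probe(Z, y, steps=300, lr=0.5):
    """Fit a logistic-regression probe by full-batch gradient descent."""
    w = np.zeros(Z.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
        grad = p - y
        w -= lr * Z.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def prequential_codelength(Z, y, first_block=10, n_blocks=8):
    """Prequential (online) code length, in bits, of labels y given features Z."""
    n = len(y)
    bounds = np.unique(np.linspace(first_block, n, n_blocks).astype(int))
    total = first_block * 1.0  # first block: uniform code, 1 bit per binary label
    start = first_block
    for end in bounds:
        if end <= start:
            continue
        w, b = train_probe(Z[:start], y[:start])
        p = 1.0 / (1.0 + np.exp(-(Z[start:end] @ w + b)))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        # Add -log2 probability assigned to the true labels in this block
        total += -np.sum(y[start:end] * np.log2(p)
                         + (1 - y[start:end]) * np.log2(1 - p))
        start = end
    return total
```

A representation from which the labels are easy to predict yields a short code length, so lower is better; this turns representation quality into a model selection criterion rather than a single held-out accuracy number.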
| Type | Thesis (Doctoral) |
|---|---|
| Qualification | Ph.D |
| Title | Advancing Representation Learning: Learning and Evaluation of Representations in Deep Neural Networks |
| Open access status | An open access version is available from UCL Discovery |
| Language | English |
| Additional information | Copyright © The Author 2025. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author's request. |
| UCL classification | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI | https://discovery.ucl.ac.uk/id/eprint/10204006 |