UCL Discovery

Probabilistic Unsupervised Learning using Recognition Parameterized Models

Walker, William; (2024) Probabilistic Unsupervised Learning using Recognition Parameterized Models. Doctoral thesis (Ph.D), UCL (University College London). Green open access

Text: William_Walker_Thesis_Final.pdf - Accepted Version, Download (8MB)
Abstract

Animals and some artificial agents must use sensory information about the environment to infer representations that allow for further action selection and cognition. This sensory information is often unlabelled, and latent statistical structure must be found in noisy observations. The recognition-parametrized model (RPM) is a novel approach to such probabilistic unsupervised learning that incorporates a flexible recognition model, giving posteriors on latent variables, without the need for a separate generative model. As a normalized semi-parametric hypothesis class for joint distributions over observed and latent variables, the RPM parameterizes both the prior distribution on the latents and their conditional distributions given observations, under the key assumption that observations are conditionally independent given the latents. The recognition model is paired with non-parametric descriptions of the marginal distribution of each observed variable. Thus, the focus is on learning a good latent representation that captures dependence between the measurements. The RPM permits exact maximum-likelihood learning in settings with discrete latents and a tractable prior, even when the mapping between continuous observations and the latents is expressed through a flexible model such as a neural network. We develop effective approximations for the case of continuous latent variables with tractable priors. Unlike the approximations necessary in dual-parametrized models such as Helmholtz machines and variational autoencoders, these RPM approximations introduce only minor bias, which may often vanish asymptotically. Furthermore, where the prior on latents is intractable, the RPM may be combined effectively with standard probabilistic techniques such as variational Bayes. The RPM provides an effective way to discover, represent and reason probabilistically about the latent structure underlying observational data, functions that are critical to both animal and artificial intelligence.
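To make the discrete-latent case described above concrete, the sketch below illustrates exact maximum-likelihood learning in an RPM with a latent z taking K states and J conditionally independent observation factors. This is not the author's code: the class name, network sizes, and the use of data-averaged recognition factors as the per-factor normalizers are assumptions made here for illustration, based only on the factorization described in the abstract.

```python
# Hypothetical sketch (not the thesis implementation): an RPM with a discrete latent
# z in {1..K}, where each observation factor x_j has its own recognition network
# f_j(z | x_j), and the joint is p(z) * prod_j [ f_j(z|x_j) / F_j(z) ] * p0_j(x_j),
# with F_j(z) taken as the average of f_j(z | x_j) over the training data.
import math
import torch
import torch.nn as nn

class DiscreteRPM(nn.Module):
    def __init__(self, x_dims, n_states):
        super().__init__()
        # One recognition network per observation factor: a softmax classifier over latent states.
        self.recog = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n_states))
            for d in x_dims
        )
        # Learnable (unnormalized) log prior over the K latent states.
        self.prior_logits = nn.Parameter(torch.zeros(n_states))

    def log_likelihood(self, xs):
        """xs: list of J tensors, each of shape (N, x_dims[j]).
        Returns the mean RPM log-likelihood per data point, up to an additive
        constant contributed by the non-parametric marginals p0_j(x_j)."""
        log_prior = torch.log_softmax(self.prior_logits, dim=0)   # (K,)
        total = log_prior.unsqueeze(0)                            # broadcasts to (N, K)
        for net, x in zip(self.recog, xs):
            log_f = torch.log_softmax(net(x), dim=-1)             # log f_j(z | x_j^(n)), shape (N, K)
            # log F_j(z) = log [ (1/N) sum_n f_j(z | x_j^(n)) ]
            log_F = torch.logsumexp(log_f, dim=0) - math.log(x.shape[0])
            total = total + log_f - log_F                         # accumulate log [ f_j / F_j ]
        # Marginalize the discrete latent exactly: log sum_k p(k) prod_j f_j(k|x_j)/F_j(k)
        return torch.logsumexp(total, dim=-1).mean()

# Usage sketch: fit by gradient ascent on the exact log-likelihood.
# xs = [torch.randn(500, 10), torch.randn(500, 5)]   # two toy observation factors
# model = DiscreteRPM(x_dims=[10, 5], n_states=8)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for _ in range(200):
#     opt.zero_grad()
#     loss = -model.log_likelihood(xs)
#     loss.backward()
#     opt.step()
```

Because the latent is discrete and the prior is tractable, the sum over states can be computed exactly, so no variational bound is needed in this setting; the continuous-latent and intractable-prior cases discussed in the abstract require the approximations developed in the thesis.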

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Probabilistic Unsupervised Learning using Recognition Parameterized Models
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: Copyright © The Author 2024. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neurosci Unit
URI: https://discovery.ucl.ac.uk/id/eprint/10188118
Downloads since deposit: 83
