UCL Discovery

Hippocampus-Inspired Representation Learning for Artificial Agents

Yu, Changmin; (2024) Hippocampus-Inspired Representation Learning for Artificial Agents. Doctoral thesis (Ph.D), UCL (University College London). Green open access

Text: thesis_final.pdf - Download (34MB)

Abstract

Spatial representations found in the hippocampal formation of freely moving mammals, such as those of grid cells, appear optimal for spatial navigation, and also afford flexible and generalisable non-spatial behaviours. In this thesis, I propose models for learning and representing the structure underlying high-dimensional observation spaces in artificial agents, drawing inspiration from hippocampal neuroscience. In the first part of the thesis, I study the construction and identification of latent representations. I propose a novel model of grid cell firing based on Fourier analysis of translation-invariant transition dynamics. I show that the effects of arbitrary actions can be predicted using a single neural representation with action-dependent weight modulation, and that this model unifies existing accounts of grid cells based on predictive planning, continuous attractors, and oscillatory interference. Next, I consider the problem of unsupervised learning of the structured latent manifold underlying population neuronal spiking, such that interdependent behavioural variables can be accurately decoded. I propose a novel amortised inference framework in which the recognition networks explicitly parametrise the posterior latent dependency structure, relaxing the full-factorisation assumption. In the second part, I propose representation learning methods inspired by neuroscience and study their application in reinforcement learning. Inspired by the observation of hippocampal “replay” in both temporally forward and backward directions, I show that incorporating temporally backward predictive reconstruction as self-supervision when training world models leads to better sample efficiency and stronger generalisation on continuous control tasks. I then propose a novel intrinsic exploration framework under a similar premise, in which the intrinsic novelty bonus is constructed from both prospective and retrospective information. The resulting agents exhibit higher exploration efficiency and ethologically plausible exploration strategies. I conclude by discussing the general implications of learning and using latent structures in both artificial and biological intelligence, and potential applications of neural-inspired representation learning beyond reinforcement learning.
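
To make the grid-cell contribution more concrete, the sketch below (not code from the thesis; arena size, action set, and variable names are illustrative assumptions) shows the underlying idea in its simplest discrete form: translation-invariant transition dynamics on a periodic arena are diagonalised by a Fourier (plane-wave) basis, so the effect of any action on a single fixed representation reduces to an action-dependent reweighting of that representation.

```python
import numpy as np

N = 16                                            # side of a periodic N x N arena (assumption)
pos = np.stack(np.meshgrid(np.arange(N), np.arange(N), indexing="ij"), -1).reshape(-1, 2)

# 2-D discrete Fourier (plane-wave) basis over the arena; rows are indexed by
# spatial frequency k, columns by position x. This fixed basis plays the role
# of a single, action-independent population representation.
F = np.exp(-2j * np.pi * (pos @ pos.T) / N)       # shape (N*N, N*N)
F_inv = np.conj(F) / (N * N)                      # exact inverse of the DFT matrix

def encode(x):
    """Fourier code of a one-hot place representation at integer position x."""
    s = np.zeros(N * N)
    s[x[0] * N + x[1]] = 1.0
    return F @ s

def action_weights(a):
    """Action-dependent modulation: one phase factor (weight) per frequency."""
    return np.exp(-2j * np.pi * (pos @ np.asarray(a)) / N)

# Predict the effect of an arbitrary displacement purely by reweighting the
# fixed representation, then decode back to position space and check that it
# lands on the translated state.
x, a = np.array([3, 7]), np.array([5, -2])
predicted = np.real(F_inv @ (action_weights(a) * encode(x)))
target = np.zeros(N * N)
y = (x + a) % N
target[y[0] * N + y[1]] = 1.0
assert np.allclose(predicted, target, atol=1e-8)
```

In this picture, moving by any action only changes the per-frequency phases applied to one fixed set of basis responses, which is one way to read the abstract's claim that a single neural representation plus action-dependent weight modulation suffices to predict the effects of arbitrary actions.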

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Hippocampus-Inspired Representation Learning for Artificial Agents
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: Copyright © The Author 2024. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10193105
