eprintid: 10193105
rev_number: 10
eprint_status: archive
userid: 699
dir: disk0/10/19/31/05
datestamp: 2024-08-20 14:55:40
lastmod: 2024-08-20 14:55:40
status_changed: 2024-08-20 14:55:40
type: thesis
metadata_visibility: show
sword_depositor: 699
creators_name: Yu, Changmin
title: Hippocampus-Inspired Representation Learning for Artificial Agents
ispublished: unpub
divisions: UCL
divisions: B04
divisions: C05
divisions: F48
note: Copyright © The Author 2024. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
abstract: Spatial representations found in the hippocampal formation of freely moving mammals, such as those of grid cells, appear optimal for spatial navigation, and also afford flexible and generalisable non-spatial behaviours. In this thesis, I propose models for learning and representing the structure underlying high-dimensional observation space in artificial agents, drawing inspiration from hippocampal neuroscience.

In the first part of the thesis, I study the construction and identification of latent representations. I propose a novel model for grid cell firing based on Fourier analysis of translation-invariant transition dynamics. I show that the effects of arbitrary actions can be predicted using a single neural representation with action-dependent weight modulation, and that this model unifies existing models of grid cells based on predictive planning, continuous attractors, and oscillatory interference. Next, I consider the problem of unsupervised learning of the structured latent manifold underlying population neuronal spiking, such that interdependent behavioural variables can be accurately decoded. I propose a novel amortised inference framework in which the recognition networks explicitly parametrise the posterior latent dependency structure, relaxing the full-factorisation assumption.

In the second part, I propose representation learning methods inspired by neuroscience and study their application in reinforcement learning. Inspired by the observation of hippocampal “replay” in both temporally forward and backward directions, I show that incorporating temporally backward predictive reconstruction self-supervision into training world models leads to better sample efficiency and stronger generalisability on continuous control tasks. I then propose a novel intrinsic exploration framework under a similar premise, where the intrinsic novelty bonus is constructed from both prospective and retrospective information. The resulting agents exhibit higher exploration efficiency and ethologically plausible exploration strategies.

I conclude by discussing the general implications of the learning and utilisation of latent structures in both artificial and biological intelligence, and potential applications of neurally inspired representation learning beyond reinforcement learning.
date: 2024-06-28
date_type: published
oa_status: green
full_text_type: other
thesis_class: doctoral_open
thesis_award: Ph.D
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 2282968
lyricists_name: Yu, Changmin
lyricists_id: CYUAX83
actors_name: Yu, Changmin
actors_id: CYUAX83
actors_role: owner
full_text_status: public
pages: 202
institution: UCL (University College London)
department: Computer Science
thesis_type: Doctoral
citation: Yu, Changmin; (2024) Hippocampus-Inspired Representation Learning for Artificial Agents. Doctoral thesis (Ph.D), UCL (University College London). Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10193105/1/thesis_final.pdf