Acero Marchesotti, Fernando (2024). Embodied Artificial Intelligence: Advanced Deep Reinforcement Learning for Robot Sensorimotor Control. Doctoral thesis (Ph.D), UCL (University College London).
Text: Acero Marchesotti_10201036_thesis.pdf (109MB). Access restricted to UCL open access staff until 1 July 2025.
Abstract
This thesis presents contributions that advance robot sensorimotor control capabilities through deep reinforcement learning, motivated by the ultimate goal of enabling robots to become artificially intelligent embodied agents. The contributions primarily relate to robot locomotion, with additional contributions to robot manipulation and grasping, and address problems relevant to enabling reinforcement learning to advance robot control capabilities. As a preliminary step, a literature review of robot control methodologies and paradigms is provided, including optimization-based and machine learning-based approaches, framing the contributions of this thesis within the realm of robot control. The thesis then addresses questions related to the advancement of embodied artificial intelligence through deep reinforcement learning. Firstly, the question of incorporating exteroception into sensorimotor control is addressed by proposing a framework for devising perceptual locomotion policies that use sparse visual observations. Combined with a reinforcement learning curriculum, this yields terrain-aware locomotion policies with robust behaviour over uneven terrains. Secondly, the limited interpretability and explainability of neural network policies is addressed by developing a framework for efficiently distilling neural policies trained via reinforcement learning into more interpretable architectures, such as decision trees and symbolic expressions, which are not directly trainable via policy gradients. Three different architectures are used to elucidate whether non-neural architectures can yield performant policies, with positive results. Thirdly, a study of recurrent neural architectures as policies is performed to investigate their performance across six robot locomotion skills, covering discrete-time architectures and recently developed continuous-time liquid neural networks. The analysis leverages learned state estimators and the recently proposed reward machines, and introduces two novel state machine formulations. Lastly, in relation to robot manipulation and grasping, the question of how to incorporate multiple sensorimotor skills to enable broad robot capabilities is addressed by developing hierarchical reinforcement learning frameworks. Overall, the aforementioned contributions have advanced robot sensorimotor control further towards embodied artificial intelligence.
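To illustrate the distillation idea mentioned in the abstract, the sketch below shows one generic way a neural policy's behaviour can be transferred into a decision tree via supervised imitation, since a tree cannot be trained directly with policy gradients. This is a minimal sketch under assumed details and is not the thesis' actual framework: the "teacher" policy is a stand-in linear map, the observation and action dimensions are arbitrary, and scikit-learn's `DecisionTreeRegressor` is used purely for illustration.

```python
# Minimal, illustrative sketch of policy distillation into a decision tree.
# NOT the thesis' framework: the "neural policy" is a stand-in linear map and
# all dimensions/hyperparameters are arbitrary choices for demonstration.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

obs_dim, act_dim = 48, 12                     # assumed robot observation/action sizes
W = rng.normal(size=(act_dim, obs_dim))       # stand-in for trained policy weights


def teacher_policy(obs: np.ndarray) -> np.ndarray:
    """Placeholder for a trained RL policy (here just a fixed linear map with tanh)."""
    return np.tanh(W @ obs)


# 1) Query the teacher to build a supervised dataset of (observation, action) pairs.
observations = rng.normal(size=(10_000, obs_dim))
actions = np.array([teacher_policy(o) for o in observations])

# 2) Fit the interpretable student: a bounded-depth regression tree. The tree is
#    non-differentiable, which is why it is trained by imitation rather than by
#    policy gradients.
student = DecisionTreeRegressor(max_depth=8)
student.fit(observations, actions)

# 3) Measure how closely the student imitates the teacher on held-out states.
test_obs = rng.normal(size=(1_000, obs_dim))
test_act = np.array([teacher_policy(o) for o in test_obs])
imitation_mse = np.mean((student.predict(test_obs) - test_act) ** 2)
print(f"Imitation MSE of distilled tree: {imitation_mse:.4f}")
```

In practice the dataset would be gathered from on-policy rollouts (e.g. in a DAgger-style loop) rather than from random states, but the supervised fit-and-evaluate structure shown here is the common core of such distillation approaches.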
| Type: | Thesis (Doctoral) |
|---|---|
| Qualification: | Ph.D |
| Title: | Embodied Artificial Intelligence: Advanced Deep Reinforcement Learning for Robot Sensorimotor Control |
| Language: | English |
| Additional information: | Copyright © The Author 2024. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author's request. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10201036 |



