Kay, Jackie (2025) Imitation, Identity, and Injustice in Artificial Intelligence. Doctoral thesis (Ph.D), UCL (University College London).
Abstract
Replicating human behavior is a popular goal of AI system design. Imitation learning is an established subfield dedicated to this objective, in which a neural network is optimized to imitate trajectories drawn from an expert's data distribution. Human-like qualities can also emerge unexpectedly, due to properties of the training data or other design parameters. This unintentional imitation, or the failure to achieve the goal of imitation, can have undesirable consequences when AI is deployed in the real world. This thesis explores the imitation of humans in artificial intelligence, from its technical dimensions to its philosophical questions and ethical implications. To first illustrate imitation as a method, I present research on building a general-purpose motor intelligence: a dataset gathered by teleoperation, and a foundation model trained via imitation learning. I then widen my concerns to imitation as a goal by studying how machines might imitate human social identity. Many existing classification systems fail to operationalize a nuanced theory of identity, thereby exacerbating social injustice. I propose technical interventions for meeting the definition of identity proposed in this thesis. Finally, I turn away from aiming to imitate human qualities and instead study how injustice typically perpetrated by humans emerges in AI. I draw on the established philosophical theory of epistemic injustice to study how unique forms of it arise in applications of generative AI. I conclude by imagining what the field of artificial intelligence could look like beyond the anthropomorphic boundaries of imitation. Throughout this thesis, I alternate between perspectives on imitation as a method, a goal, and an emergent phenomenon to gain holistic insight into how the automated imitation of humanity, whether intentional or accidental, impacts all levels of society. This triangulation enables an interdisciplinary reflection on the technical and ethical responsibilities of machine learning practitioners.
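For readers unfamiliar with imitation learning as described in the abstract, the sketch below illustrates behavioural cloning, the simplest form of the idea: a policy network is trained by supervised regression to reproduce an expert's actions. This is a generic, minimal example and not the method, dataset, or model developed in the thesis; the dimensions, architecture, and synthetic "expert" data are placeholder assumptions.

```python
# Minimal behavioural-cloning sketch (illustrative only, not the thesis's method).
# A policy network is fit by supervised regression to reproduce expert actions.
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 8  # placeholder observation/action sizes

# Policy: maps observations to predicted actions.
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in for expert trajectories; in practice these would come from
# teleoperated demonstrations or another expert data source.
expert_obs = torch.randn(10_000, obs_dim)
expert_act = torch.randn(10_000, act_dim)

for step in range(1_000):
    idx = torch.randint(0, expert_obs.shape[0], (256,))  # sample a minibatch
    pred_act = policy(expert_obs[idx])
    loss = loss_fn(pred_act, expert_act[idx])  # penalize deviation from the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```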
| Type: | Thesis (Doctoral) |
|---|---|
| Qualification: | Ph.D |
| Title: | Imitation, Identity, and Injustice in Artificial Intelligence |
| Open access status: | An open access version is available from UCL Discovery |
| Language: | English |
| Additional information: | Copyright © The Author 2025. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author's request. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10205347 |


