UCL Discovery

Responsibility Attributions towards Artificial Agents

Franklin, Matija; (2025) Responsibility Attributions towards Artificial Agents. Doctoral thesis (Ph.D), UCL (University College London).

Franklin__thesis.pdf - Accepted Version (12MB)
Access restricted to UCL open access staff until 1 September 2026.

Abstract

This PhD systematically investigates how responsibility is attributed to human and artificial agents, addressing challenges posed by the integration of autonomous systems into societal decision-making. Conducted amidst rapid advancements in artificial intelligence, this research examines how factors such as intent, foresight, causality, capability, role, prevention effort, and desire shape responsibility judgments. Drawing on interdisciplinary theories from psychology, law, and philosophy, the research develops a comprehensive framework for understanding and predicting responsibility attributions in complex, multi-agent systems. The thesis comprises three interconnected chapters. Chapter One explores how intent, foresight, and desire influence blame attributions toward human and AI agents, using experimental methods to bridge intuitive judgments with formal legal constructs of intent. Chapter Two examines the roles of capability and social positions, such as advisor or employee, in shaping responsibility judgments across diverse contexts. This chapter highlights how perceptions of capability and role interact to influence attributions of blame and accountability. Chapter Three integrates these findings into a holistic framework, leveraging innovative methodologies including serious games and social media analysis to investigate responsibility attribution in real-world and experimental settings. Key findings reveal that human agents are judged predominantly on mental-state factors (e.g., intentionality, foreseeability), whereas machine agents are assessed more on role-specific capacities and prevention efforts. The research also identifies an anchoring–saturation updating pattern: a single high-diagnostic factor can anchor blame judgments, after which additional cues have diminishing marginal impact; when early cues are low-impact, later diagnostic information produces larger upward revisions. The work further highlights blame diffusion, with judgments extending beyond individual agents to systemic entities such as companies and programmers. By combining theoretical insight with ecologically valid evidence, this PhD advances understanding of responsibility attribution in human–AI interactions and offers actionable implications for AI ethics, governance, and public trust.

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Responsibility Attributions towards Artificial Agents
Language: English
Additional information: Copyright © The Author 2025. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
URI: https://discovery.ucl.ac.uk/id/eprint/10212272