Schoeller, F; Miller, M; Salomon, R; Friston, KJ; (2021) Trust as Extended Control: Human-Machine Interactions as Active Inference. Frontiers in Systems Neuroscience, 15, Article 669810. 10.3389/fnsys.2021.669810.
Abstract
In order to interact seamlessly with robots, users must infer the causes of a robot's behavior, and be confident about that inference (and its predictions). Hence, trust is a necessary condition for human-robot collaboration (HRC). However, despite its crucial role, it is still largely unknown how trust emerges, develops, and supports human relationships with technological systems. In the following paper we review the literature on trust, human-robot interaction, HRC, and human interaction at large. Early models of trust suggest that it is a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We go on to introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for the extension of the trustor's perception-action cycle. This model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust, such as the benevolence and competence attributed to the trustee, to be defined in terms of hierarchical active inference, while vulnerability can be described in terms of information exchange and empowerment. Furthermore, this model emphasizes the role of user feedback during HRC and suggests that boredom and surprise may be used in personalized interactions as markers for under- and over-reliance on the system. The description of trust as a sense of virtual control offers a crucial step toward grounding human factors in cognitive neuroscience and improving the design of human-centered technology.
Furthermore, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.
Type: | Article |
---|---|
Title: | Trust as Extended Control: Human-Machine Interactions as Active Inference |
Location: | Switzerland |
Open access status: | An open access version is available from UCL Discovery |
DOI: | 10.3389/fnsys.2021.669810 |
Publisher version: | https://doi.org/10.3389/fnsys.2021.669810 |
Language: | English |
Additional information: | This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ |
Keywords: | Active inference, cobotics, control, extended mind hypothesis, human computer interaction, human-robot interaction, trust |
UCL classification: | UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > UCL Queen Square Institute of Neurology > Imaging Neuroscience |
URI: | https://discovery.ucl.ac.uk/id/eprint/10137816 |