TY  - INPR
TI  - Artificial Intelligence, Trust, and Perceptions of Agency
UR  - https://doi.org/10.5465/amr.2022.0041
PB  - Academy of Management
A1  - Vanneste, Bart
A1  - Puranam, Phanish
Y1  - 2024/12/01/
N2  - Modern artificial intelligence (AI) technologies based on deep learning architectures are often perceived as agentic to varying degrees: typically, as more agentic than other technologies but less agentic than humans. We theorize how different levels of perceived agency of AI affect human trust in AI. We do so by investigating three causal pathways. First, an AI (and its designer) perceived as more agentic will be seen as more capable, and therefore will be perceived as more trustworthy. Second, the more the AI is perceived as agentic, the more important are trustworthiness perceptions about the AI relative to those about its designer. Third, because of betrayal aversion, the anticipated psychological cost of the AI violating trust increases with how agentic it is perceived to be. These causal pathways imply, perhaps counterintuitively, that making an AI appear more agentic may increase or decrease the trust that humans place in it: success at meeting the Turing test may go hand in hand with a decrease of trust in AI. We formulate propositions linking agency perceptions to trust in AI, by exploiting variations in the context in which the human-AI interaction occurs and the dynamics of trust updating.
AV  - restricted
JF  - Academy of Management Review
SN  - 0363-7425
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
ID  - discovery10186132
ER  -