TY  - JOUR
KW  - Trust
KW  - artificial intelligence
KW  - human-AI interaction
KW  - health care
KW  - healthcare AI
KW  - systematic review
KW  - MMAT
KW  - logic model
TI  - Influences on User Trust in Healthcare Artificial Intelligence: A Systematic Review
UR  - https://doi.org/10.12688/wellcomeopenres.17550.1
AV  - public
JF  - Wellcome Open Research
N1  - © 2022 Jermutus E et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ID  - discovery10143985
VL  - 7
PB  - F1000 Research Ltd
Y1  - 2022/02/18/
A1  - Jermutus, Eva
A1  - Kneale, Dylan
A1  - Thomas, James
A1  - Michie, Susan
N2  - BACKGROUND: 
Artificial Intelligence (AI) is becoming increasingly prominent in domains such as healthcare. It is argued to be transformative by altering the way in which healthcare data is used. The realisation and success of AI depend heavily on people's trust in its applications. Yet, influences on trust in healthcare AI (HAI) applications have so far been underexplored. The objective of this study was to identify aspects related to users, AI applications and the wider context that influence trust in HAI.

METHODS: 
We performed a systematic review to map out influences on user trust in HAI. To identify relevant studies, we searched seven electronic databases in November 2019 (ACM Digital Library, IEEE Xplore, NHS Evidence, ProQuest Dissertations & Theses Global, PsycINFO, PubMed, Web of Science Core Collection). Searches were restricted to publications available in English and German. To be included, studies had to be empirical; focus on an AI application (excluding robotics) in a health-related setting; and evaluate the application with regard to users.

RESULTS: 
Three studies, one mixed-methods and two qualitative, all in English, were included. Influences on trust fell into three broad categories: human-related (knowledge, expectation, mental model, self-efficacy, type of user, age, gender), AI-related (data privacy and safety, operational safety, transparency, design, customizability, trialability, explainability, understandability, power-control-balance, benevolence) and context-related (AI company, media, users' social network). These factors informed an updated logic model illustrating the relationships between them.

CONCLUSIONS: 
Trust in HAI depends on a variety of factors, both external and internal to AI applications. This study contributes to our understanding of trust in HAI by highlighting key influences and by pointing to gaps and issues in existing research on trust and AI. In so doing, it offers a starting point for further investigation of trust environments as well as trustworthy AI applications.
ER  -