Feldman-Maggor, Yael; Cukurova, Mutlu; Kent, Carmel; Alexandron, Giora (2025) The Impact of Explainable AI on Teachers’ Trust and Acceptance of AI EdTech Recommendations: The Power of Domain-specific Explanations. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-025-00486-6 (In press).
Text: s40593-025-00486-6.pdf - Accepted Version (2MB)
Abstract
Trust is crucial for teachers’ adoption of AI-enhanced educational technologies (AI-EdTech), yet how this trust is formed and maintained remains poorly understood. One aspect of system design that appears profoundly related to trust is transparency, which can be achieved through explainable AI (XAI) approaches. The present study explores the dynamic nature of teachers’ trust in AI-EdTech systems, how it relates to understandability, and the role of XAI in enhancing it. Building upon Hoff and Bashir’s (2015) ‘trust in automation’ model, we propose a theoretical model that connects these factors. We validated the applicability of the proposed model to the AI in Education context using a mixed-method, within-subject design that measured understandability, trust, and acceptance of AI recommendations among 41 in-service chemistry teachers. The results showed a significant positive correlation between the three factors, as anticipated by the model, and demonstrated the heterogeneous understandability of different XAI schemes, with domain-driven schemes proving superior to data-driven ones. In addition, the study revealed two further factors influencing teachers’ adoption of AI-EdTech: pedagogical perspectives and workload-reduction potential. The study provides a theoretical explanation of how different XAI schemes affect trust through understandability, and it underscores the need for greater attention to XAI that fosters trust and facilitates the acceptance of AI-EdTech.
| Type: | Article |
| ---|--- |
| Title: | The Impact of Explainable AI on Teachers’ Trust and Acceptance of AI EdTech Recommendations: The Power of Domain-specific Explanations |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1007/s40593-025-00486-6 |
| Publisher version: | https://doi.org/10.1007/s40593-025-00486-6 |
| Language: | English |
| Additional information: | Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
| Keywords: | Explainable AI (XAI), Trust, Acceptance of AI, Understandability |
| UCL classification: | UCL > Provost and Vice Provost Offices > School of Education > UCL Institute of Education > IOE - Culture, Communication and Media |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10210025 |