
C-XAI: A conceptual framework for designing XAI tools that support trust calibration

Naiseh, Mohammad; Simkute, Auste; Zieni, Baraa; Jiang, Nan; Ali, Raian (2024) C-XAI: A conceptual framework for designing XAI tools that support trust calibration. Journal of Responsible Technology, 17, Article 100076. 10.1016/j.jrt.2024.100076.


Abstract

Recent advancements in machine learning have spurred an increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to elevate decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to tackle both technical and human factors in the creation of XAI interfaces geared towards trust calibration in human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. This is achieved through the adoption of a participatory design approach, which includes providing templates and guidance, and involving diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.

Type: Article
Title: C-XAI: A conceptual framework for designing XAI tools that support trust calibration
Open access status: An open access version is available from UCL Discovery
DOI: 10.1016/j.jrt.2024.100076
Publisher version: http://dx.doi.org/10.1016/j.jrt.2024.100076
Language: English
Additional information: © 2024 The Author(s). Published by Elsevier Ltd on behalf of ORBIT under a Creative Commons license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Keywords: Explainable AI, Human-centred design, Participatory design, Human-AI teaming
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Electronic and Electrical Eng
URI: https://discovery.ucl.ac.uk/id/eprint/10189099
