Sharma, Shivaang; Aristidou, Angela; (2025) How Stakeholders Operationalize Responsible Artificial Intelligence (AI) in Data Sensitive Contexts. MIS Quarterly Executive, 24 (2), Article 4. 10.17705/2msqe.00114.
Abstract
Operationalizing the responsible use of AI in data-sensitive, multi-stakeholder contexts is challenging. We studied how six AI tools were operationalized in a humanitarian crisis context, which involved aid agency decision makers, private technology firms, and vulnerable populations. From the insights gained, we identify five types of “AI responsibility rifts” (AIRRs): the differences in the subjective expectations, values, and perceived impacts of stakeholders when operationalizing an AI tool in data-sensitive contexts. We propose the self-assessment SHARE framework to mitigate these rifts and provide recommendations for closing the identified gaps.
| Type: | Article |
|---|---|
| Title: | How Stakeholders Operationalize Responsible Artificial Intelligence (AI) in Data Sensitive Contexts |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.17705/2msqe.00114 |
| Publisher version: | https://aisel.aisnet.org/misqe/vol24/iss2/4/ |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > UCL School of Management |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10205651 |