Wong, Kester; Bulathwela, Sahan; Cukurova, Mutlu (2025) Explainable Collaborative Problem Solving Diagnosis with BERT using SHAP and its Implications for Teacher Adoption. In: Proceedings of HEXED 2025: 2nd Human-Centric eXplainable AI in Education (HEXED) Workshop, Palermo, Italy. (In press).
Text: 2507.14584v1.pdf - Accepted Version (696kB)
Abstract
The use of the Bidirectional Encoder Representations from Transformers (BERT) model and its variants for classifying collaborative problem solving (CPS) has been extensively explored within the AI in Education community. However, limited attention has been given to understanding how individual tokenised words in the dataset contribute to the model's classification decisions. Enhancing the explainability of BERT-based CPS diagnostics is essential to better inform end users such as teachers, thereby fostering greater trust and facilitating wider adoption in education. This study took a preliminary step towards model transparency and explainability by using SHapley Additive exPlanations (SHAP) to examine how different tokenised words in transcription data contributed to a BERT model's classification of CPS processes. The findings suggest that well-performing classifications did not necessarily rest on reasonable explanations for the classification decisions. Particular tokenised words were relied on frequently to drive classifications. The analysis also identified a spurious word that contributed positively to the classification despite not being semantically meaningful for the class. While such model transparency is unlikely to help end users improve their practice directly, it can help them avoid over-relying on LLM diagnostics and neglecting their own expertise. We conclude the workshop paper by noting that the extent to which the model uses tokens appropriately for its classification is associated with the number of classes involved. This calls for investigation of ensemble model architectures and of human-AI complementarity for CPS diagnosis, since considerable human reasoning is still required for fine-grained discrimination of CPS subskills.
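The token-level attribution idea the abstract refers to can be illustrated with a self-contained toy sketch. The "classifier" below is an invented additive scoring function standing in for a BERT model's predicted probability for a hypothetical CPS class, and the token weights are made up for illustration; the paper itself applies Partition SHAP to a real BERT model. The sketch computes exact Shapley values per token by masking subsets of tokens, which is the principle SHAP approximates at scale:

```python
from itertools import combinations
from math import factorial

# Invented token weights for a toy CPS-class scorer (illustration only).
WEIGHTS = {"maybe": 0.2, "we": 0.1, "could": 0.2, "try": 0.3, "this": 0.05}

def predict(tokens):
    """Toy stand-in for a classifier's class probability.
    Masked positions are None; score is the capped sum of unmasked weights."""
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens if t is not None)
    return min(s, 1.0)

def shapley_values(tokens, f):
    """Exact Shapley value of each token position under model f,
    masking absent tokens with None."""
    n = len(tokens)
    values = [0.0] * n
    positions = range(n)
    for i in positions:
        others = [j for j in positions if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Shapley kernel weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [tokens[j] if (j in subset or j == i) else None
                          for j in positions]
                without_i = [tokens[j] if j in subset else None
                             for j in positions]
                values[i] += weight * (f(with_i) - f(without_i))
    return values

tokens = ["maybe", "we", "could", "try", "this"]
phi = shapley_values(tokens, predict)
for tok, v in zip(tokens, phi):
    print(f"{tok}: {v:+.3f}")
```

Because the toy model is additive, each token's Shapley value recovers its weight exactly, and the values sum to the model's output for the full utterance; with a real BERT classifier the same masking procedure surfaces which tokens push a transcript towards a given CPS label, including spurious ones.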
Type: Proceedings paper
Title: Explainable Collaborative Problem Solving Diagnosis with BERT using SHAP and its Implications for Teacher Adoption
Event: HEXED 2025: 2nd Human-Centric eXplainable AI in Education (HEXED) Workshop
Location: Palermo, Italy
Open access status: An open access version is available from UCL Discovery
Publisher version: https://hexed-workshop.github.io/
Language: English
Additional information: Copyright © 2025. Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/deed.en).
Keywords: Collaborative Problem Solving; Explainable AI; Partition SHAP; Large Language Model
UCL classification: UCL > Provost and Vice Provost Offices > School of Education > UCL Institute of Education > IOE - Culture, Communication and Media
URI: https://discovery.ucl.ac.uk/id/eprint/10212933