Zhou, Q., Suraworachet, W. and Cukurova, M. (2023) Detecting non-verbal speech and gaze behaviours with multimodal data and computer vision to interpret effective collaborative learning interactions. Education and Information Technologies. https://doi.org/10.1007/s10639-023-12315-1 (In press).
Abstract
Collaboration is argued to be an important skill, not only in schools and higher education contexts but also in the workplace and other aspects of life. However, simply asking students to work together as a group on a task does not guarantee success in collaboration. Effective collaborative learning requires meaningful interactions among individuals in a group. Recent advances in multimodal data collection tools and AI provide unique opportunities to analyse, model and support these interactions. This study proposes an original method to identify group interactions in real-world collaborative learning activities and investigates the variations in interactions of groups with different collaborative learning outcomes. The study was conducted in a 10-week-long postgraduate course involving 34 students, with data collected from groups' weekly collaborative learning interactions lasting ~60 minutes per session. The results showed that groups with different levels of shared understanding exhibited significant differences in the time spent on, and maximum duration of, referring and following behaviours. Further analysis using process mining techniques revealed that groups with different outcomes exhibited different patterns of group interactions. A loop between students' referring and following behaviours and resource management behaviours was identified in groups with better collaborative learning outcomes. The study indicates that the non-verbal behaviours studied here, which can be auto-detected with advanced computer vision techniques and multimodal data, have the potential to distinguish groups with different collaborative learning outcomes. The insights generated can also support the practice of collaborative learning for learners and educators. Further research should examine the cross-context validity of the proposed distinctions and explore the approach's potential to be developed into a real-world, real-time support system for collaborative learning.
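The process-mining step mentioned in the abstract can be illustrated with a minimal sketch. Assuming each group session has already been coded into an ordered sequence of behaviour episodes (the labels below mirror the behaviours named in the abstract, but the sequences themselves are invented for illustration and are not the study's data), counting directly-follows transitions is the basic relation that process-discovery tools such as pm4py build on; the loop described in the abstract would surface as a cycle among these transition pairs.

```python
from collections import Counter

# Hypothetical coded behaviour sequences, one per group session.
# Labels mirror behaviours discussed in the abstract; values are invented.
sessions = {
    "group_A": ["referring", "following", "resource_management",
                "referring", "following", "resource_management"],
    "group_B": ["referring", "referring", "following", "referring"],
}

def directly_follows(trace):
    """Count how often one coded behaviour is immediately followed by another."""
    return Counter(zip(trace, trace[1:]))

for group, trace in sessions.items():
    print(group, dict(directly_follows(trace)))
```

For group_A this yields counts such as {('referring', 'following'): 2, ('following', 'resource_management'): 2, ('resource_management', 'referring'): 1}, i.e. a referring → following → resource-management cycle of the kind the abstract associates with better collaborative learning outcomes. This is a sketch of the directly-follows relation only, not the authors' actual analysis pipeline.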
| Type: | Article |
|---|---|
| Title: | Detecting non-verbal speech and gaze behaviours with multimodal data and computer vision to interpret effective collaborative learning interactions |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1007/s10639-023-12315-1 |
| Publisher version: | https://doi.org/10.1007/s10639-023-12315-1 |
| Language: | English |
| Additional information: | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
| Keywords: | Learning Analytics, Collaborative Learning, Process Mining |
| UCL classification: | UCL > Provost and Vice Provost Offices > School of Education > UCL Institute of Education > IOE - Culture, Communication and Media |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10183314 |