
LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery

Chen, K; Du, Y; You, T; Islam, M; Guo, Z; Jin, Y; Chen, G; (2024) LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery. In: Proceedings - IEEE International Conference on Robotics and Automation. (pp. 10772-10778). IEEE: Yokohama, Japan. Green open access

Text: LLM-Assisted.pdf - Accepted Version. Download (1MB)

Abstract

Visual question answering (VQA) is crucial for promoting robotic-assisted surgical education. In practice, the needs of trainees are constantly evolving, such as learning about more surgery types and adapting to new surgical instruments and techniques. Therefore, the VQA system must be continually updated with a sequential data stream from multiple sources to address new tasks in robotic surgery. In surgical scenarios, patient-data privacy often restricts the availability of old data when updating the model, necessitating an exemplar-free continual learning (CL) setup. However, prior studies overlooked two vital problems of the surgical domain: i) large domain shifts from diverse surgical operations collected from multiple departments or clinical centers, and ii) severe data imbalance arising from the uneven presence of surgical instruments or activities during surgical procedures. This paper proposes to address these two problems with a multimodal large language model (LLM) and an adaptive weight assignment methodology. We first develop a new multi-teacher CL framework that leverages a multimodal LLM as an additional teacher; the strong generalization ability of the LLM bridges the knowledge gap when domain shifts and data imbalances occur. We then put forth a novel data processing method that transforms complex LLM embeddings into logits compatible with our CL framework. We also design an adaptive weight assignment approach that balances the generalization ability of the LLM and the domain expertise of the old CL model. Finally, we construct a new dataset for surgical VQA tasks. Extensive experimental results demonstrate the superiority of our method over other advanced CL models.
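The core idea described in the abstract, distilling the new model from two teachers (the frozen old CL model and a multimodal LLM whose embeddings have been mapped to answer-class logits) with per-sample adaptive weights, can be illustrated with a minimal sketch. The entropy-based weighting rule, temperature, and loss coefficients below are illustrative assumptions, not the authors' exact formulation; it is also assumed that the LLM outputs have already been converted to logits over the same answer vocabulary.

# Minimal sketch of LLM-assisted multi-teacher distillation for exemplar-free CL.
# The confidence (entropy)-based adaptive weighting is an assumption for
# illustration; the paper's exact weight-assignment rule may differ.
import torch
import torch.nn.functional as F

def adaptive_weight(old_logits: torch.Tensor, llm_logits: torch.Tensor) -> torch.Tensor:
    """Assumed rule: weight each teacher by its prediction confidence
    (lower entropy -> larger weight), computed per sample."""
    def confidence(logits):
        p = logits.softmax(dim=-1)
        entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=-1)   # shape (B,)
        return 1.0 / (1.0 + entropy)
    c_old, c_llm = confidence(old_logits), confidence(llm_logits)
    return c_old / (c_old + c_llm)                              # weight for the old CL teacher

def multi_teacher_loss(student_logits, old_logits, llm_logits, labels, T=2.0, lam=0.5):
    """Cross-entropy on the new task plus KL distillation toward a per-sample
    convex combination of the old CL model's and the LLM teacher's logits."""
    w = adaptive_weight(old_logits, llm_logits).unsqueeze(-1)   # shape (B, 1)
    teacher_logits = w * old_logits + (1.0 - w) * llm_logits
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return ce + lam * kd

# Example with random tensors (10 answer classes, batch of 4):
B, C = 4, 10
loss = multi_teacher_loss(torch.randn(B, C), torch.randn(B, C),
                          torch.randn(B, C), torch.randint(0, C, (B,)))

The design intent this sketch tries to capture is that when the old CL model is uncertain (e.g., under domain shift or on under-represented classes), the LLM teacher's more general predictions receive a larger share of the distillation target, and vice versa.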

Type: Proceedings paper
Title: LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery
Event: 2024 IEEE International Conference on Robotics and Automation (ICRA)
Dates: 13 May 2024 - 17 May 2024
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/ICRA57147.2024.10610603
Publisher version: https://doi.org/10.1109/ICRA57147.2024.10610603
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Continuing education, Adaptation models, Visualization, Instruments, Large language models, Surgery, Transforms
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Med Phys and Biomedical Eng
URI: https://discovery.ucl.ac.uk/id/eprint/10197049
