eprintid: 10197049
rev_number: 7
eprint_status: archive
userid: 699
dir: disk0/10/19/70/49
datestamp: 2024-09-17 10:51:02
lastmod: 2024-09-17 10:51:02
status_changed: 2024-09-17 10:51:02
type: proceedings_section
metadata_visibility: show
sword_depositor: 699
creators_name: Chen, K
creators_name: Du, Y
creators_name: You, T
creators_name: Islam, M
creators_name: Guo, Z
creators_name: Jin, Y
creators_name: Chen, G
creators_name: Heng, PA
title: LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery
ispublished: pub
divisions: UCL
divisions: B04
divisions: F42
keywords: Continuing education, Adaptation models, Visualization, Instruments, Large language models, Surgery, Transforms
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: Visual question answering (VQA) is fundamentally important for promoting robotic-assisted surgical education. In practice, the needs of trainees are constantly evolving, such as learning more surgical types and adapting to new surgical instruments/techniques. Therefore, continually updating the VQA system with a sequential data stream from multiple resources is needed in robotic surgery to address new tasks. In surgical scenarios, the privacy issue of patient data often restricts the availability of old data when updating the model, necessitating an exemplar-free continual learning (CL) setup. However, prior studies overlooked two vital problems of the surgical domain: i) large domain shifts from diverse surgical operations collected from multiple departments or clinical centers, and ii) severe data imbalance arising from the uneven presence of surgical instruments or activities during surgical procedures. This paper proposes to address these two problems with a multimodal large language model (LLM) and an adaptive weight assignment methodology. We first develop a new multi-teacher CL framework that leverages a multimodal LLM as the additional teacher. The strong generalization ability of the LLM can bridge the knowledge gap when domain shifts and data imbalances occur. We then put forth a novel data processing method that transforms complex LLM embeddings into logits compatible with our CL framework. We also design an adaptive weight assignment approach that balances the generalization ability of the LLM and the domain expertise of the old CL model. Finally, we construct a new dataset for surgical VQA tasks. Extensive experimental results demonstrate the superiority of our method to other advanced CL models.
date: 2024-08-08
date_type: published
publisher: IEEE
official_url: https://doi.org/10.1109/ICRA57147.2024.10610603
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 2310867
doi: 10.1109/ICRA57147.2024.10610603
lyricists_name: Islam, Mobarakol
lyricists_id: MISLB53
actors_name: Islam, Mobarakol
actors_id: MISLB53
actors_role: owner
full_text_status: public
pres_type: paper
publication: Proceedings - IEEE International Conference on Robotics and Automation
place_of_pub: Yokohama, Japan
pagerange: 10772-10778
event_title: 2024 IEEE International Conference on Robotics and Automation (ICRA)
event_dates: 13 May 2024 - 17 May 2024
issn: 1050-4729
book_title: Proceedings - IEEE International Conference on Robotics and Automation
citation: Chen, K; Du, Y; You, T; Islam, M; Guo, Z; Jin, Y; Chen, G; Heng, PA; (2024) LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery. In: Proceedings - IEEE International Conference on Robotics and Automation. (pp. 10772-10778). IEEE: Yokohama, Japan. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10197049/1/LLM-Assisted.pdf