Lu, Bo; Li, Bin; Chen, Wei; Jin, Yueming; Zhao, Zixu; Dou, Qi; Heng, Pheng-Ann; (2021) Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework. IEEE Transactions on Automation Science and Engineering. 10.1109/tase.2021.3136185.
Abstract
To realize higher-level autonomy in surgical knot tying for minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even under complex environments. The whole task is initialized by suture segmentation, for which we propose a novel semi-supervised learning architecture featuring a suture-aware loss that learns the suture's slender structure from both annotated and unannotated data. Given successful segmentation in the stereo camera views, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly considering the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to accomplish the grasping task autonomously. Our framework is extensively evaluated, from learning-based segmentation and 3D reconstruction to image-guided grasping, on the da Vinci Research Kit (dVRK) platform, where it achieves high performance and success rates in both perception and robotic manipulation. These results demonstrate the feasibility of our approach in automating the suture grasping task; this work fills the gap between automated surgical stitching and looping, stepping toward a higher level of task autonomy in surgical knot tying.
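The semi-supervised segmentation step summarised above (a suture-aware loss trained on both annotated and unannotated data) can be illustrated with a minimal sketch. The paper's exact loss is not given in this record, so the weighting scheme, function names, and the consistency term below are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a semi-supervised segmentation objective with a
# "suture-aware" weighting; the actual loss in the paper may differ.
import torch
import torch.nn.functional as F

def suture_aware_weight(mask, band=5):
    """Upweight pixels in a narrow band around the thin suture so the
    slender foreground is not dominated by background pixels.
    mask: (B, 1, H, W) binary ground-truth suture mask."""
    kernel = torch.ones(1, 1, band, band, device=mask.device)
    near = (F.conv2d(mask.float(), kernel, padding=band // 2) > 0).float()
    return 1.0 + 4.0 * near  # pixels near the suture count five times more

def semi_supervised_loss(logits_l, target_l, logits_u, logits_u_aug, lam=0.5):
    """Suture-weighted supervised BCE on labeled images plus a consistency
    term between two predictions of the same unlabeled image."""
    w = suture_aware_weight(target_l)
    sup = F.binary_cross_entropy_with_logits(logits_l, target_l.float(), weight=w)
    cons = F.mse_loss(torch.sigmoid(logits_u), torch.sigmoid(logits_u_aug))
    return sup + lam * cons
```

The weighting emphasizes the thin suture region during supervised training, while the consistency term is one common way to exploit unannotated frames; both are stand-ins for whatever mechanisms the paper actually uses.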
| Type: | Article |
|---|---|
| Title: | Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1109/tase.2021.3136185 |
| Publisher version: | https://doi.org/10.1109/TASE.2021.3136185 |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions. |
| Keywords: | Grasping, Robots, Three-dimensional displays, Task analysis, Robot sensing systems, Image segmentation, Shape |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10148753 |