eprintid: 10133003
rev_number: 16
eprint_status: archive
userid: 608
dir: disk0/10/13/30/03
datestamp: 2021-08-17 12:00:29
lastmod: 2021-10-11 22:20:37
status_changed: 2021-08-17 12:00:29
type: article
metadata_visibility: show
creators_name: Pachtrachai, K
creators_name: Vasconcelos, F
creators_name: Edwards, P
creators_name: Stoyanov, D
title: Learning to Calibrate - Estimating the Hand-eye Transformation without Calibration Objects
ispublished: pub
divisions: UCL
divisions: B04
divisions: C05
divisions: F48
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
abstract: Hand-eye calibration is a method to determine the transformation linking the robot and camera coordinate systems. Conventional calibration algorithms use a calibration grid to determine camera poses corresponding to the robot poses; both are then used in the main calibration procedure. Although such methods yield good calibration accuracy and are suitable for offline applications, they are not applicable in a dynamic environment such as robotic-assisted minimally invasive surgery (RMIS), because changes in the setup are disruptive and time-consuming to the workflow, as each change requires yet another calibration procedure. In this paper, we propose a neural network-based hand-eye calibration method that does not require camera poses from a calibration grid, but instead uses only the motion of surgical instruments in the camera frame and their corresponding robot poses as input to recover the hand-eye matrix. The advantages of using a neural network are that the method is not limited to a single rigid transformation alignment and can learn dynamic changes correlated with kinematics and tool motion/interactions. Its loss function is derived from the original hand-eye transformation, the re-projection error, and the pose error with respect to the remote centre of motion. The proposed method is validated with data from a da Vinci Si, and the results indicate that the designed network architecture can extract the relevant information and estimate the hand-eye matrix. Unlike conventional hand-eye approaches, it does not require camera pose estimation, which significantly simplifies the hand-eye problem in the RMIS context, as the hand-eye relationship can be updated with a trained network and a sequence of images. This introduces the potential of creating a hand-eye calibration
date: 2021-10-01
date_type: published
official_url: https://doi.org/10.1109/LRA.2021.3098942
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1882223
doi: 10.1109/LRA.2021.3098942
lyricists_name: Pachtrachai, Krittin
lyricists_name: Porto Guerra E Vasconcelos, Francisco
lyricists_name: Stoyanov, Danail
lyricists_id: KPACH34
lyricists_id: FVASC02
lyricists_id: DSTOY26
actors_name: Pachtrachai, Krittin
actors_id: KPACH34
actors_role: owner
full_text_status: public
publication: IEEE Robotics and Automation Letters
volume: 6
number: 4
pagerange: 7309-7316
citation: Pachtrachai, K; Vasconcelos, F; Edwards, P; Stoyanov, D; (2021) Learning to Calibrate - Estimating the Hand-eye Transformation without Calibration Objects. IEEE Robotics and Automation Letters, 6 (4) pp. 7309-7316. 10.1109/LRA.2021.3098942 <https://doi.org/10.1109/LRA.2021.3098942>. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10133003/1/root_compressed.pdf