TY  - CONF
N2  - We present a novel robotic grasping system using a caging-style gripper that combines one-shot affordance localization and zero-shot object identification. We demonstrate an integrated system requiring minimal prior knowledge, focusing on flexible few-shot, object-agnostic approaches. To grasp a novel target object, we use as input the color and depth of the scene, an image of an object affordance similar to the target object, and a text prompt of up to three words describing the target object. We demonstrate the system with real-world grasping of objects from the YCB benchmark set, with four distractor objects cluttering the scene. Overall, our pipeline achieves success rates of 96% for affordance localization, 62.5% for object identification, and 72% for grasping. Videos are on the project website: https://sites.google.com/view/rl-affcorrs-grasp
ID  - discovery10178561
UR  - https://doi.org/10.1109/SII58957.2024.10417178
Y1  - 2024/01/08/
CY  - Ha Long, Vietnam
TI  - Reinforcement Learning-based Grasping via One-Shot Affordance Localization and Zero-Shot Contrastive Language–Image Learning
AV  - public
PB  - IEEE
A1  - Long, Xiang
A1  - Beddow, Luke
A1  - Hadjivelichkov, Denis
A1  - Delfaki, Andromachi Maria
A1  - Wurdemann, Helge
A1  - Kanoulas, Dimitrios
KW  - Location awareness
KW  - Affordances
KW  - Pipelines
KW  - Grasping
KW  - System integration
KW  - Robots
KW  - Videos
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
ER  -