Title: Reinforcement Learning-based Grasping via One-Shot Affordance Localization and Zero-Shot Contrastive Language–Image Learning

Authors: Long, Xiang; Beddow, Luke; Hadjivelichkov, Denis; Delfaki, Andromachi Maria; Wurdemann, Helge; Kanoulas, Dimitrios

Abstract: We present a novel robotic grasping system using a caging-style gripper that combines one-shot affordance localization and zero-shot object identification. We demonstrate an integrated system requiring minimal prior knowledge, focusing on flexible few-shot, object-agnostic approaches. For grasping a novel target object, we use as input the color and depth of the scene, an image of an object affordance similar to the target object, and a text prompt of up to three words describing the target object. We demonstrate the system on real-world grasping of objects from the YCB benchmark set, with four distractor objects cluttering the scene. Overall, our pipeline achieves success rates of 96% for affordance localization, 62.5% for object identification, and 72% for grasping. Videos are on the project website: https://sites.google.com/view/rl-affcorrs-grasp

Subjects: Location awareness; Affordances; Pipelines; Grasping; System integration; Robots; Videos

Publisher: IEEE
Date: 2024-01-08
Type: Proceedings paper
Language: English
Source: In: Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII). IEEE: Ha Long, Vietnam. (2024)
Format: text
Identifiers: https://discovery.ucl.ac.uk/id/eprint/10178561/1/sii_2024_xiang.pdf ; https://discovery.ucl.ac.uk/id/eprint/10178561/
Rights: open