Title: Unsupervised Point Cloud Pre-training via Occlusion Completion
Creators: Wang, H; Liu, Q; Yue, X; Lasenby, J; Kusner, MJ
Description: We describe a simple pre-training approach for point clouds. It works in three steps: 1. Mask all points occluded in a camera view; 2. Learn an encoder-decoder model to reconstruct the occluded points; 3. Use the encoder weights as initialisation for downstream point cloud tasks. We find that even when we pre-train on a single dataset (ModelNet40), this method improves accuracy across different datasets and encoders, on a wide range of downstream tasks. Specifically, we show that our method outperforms previous pre-training methods in object classification, and both part-based and semantic segmentation tasks. We study the pre-trained features and find that they lead to wide downstream minima, have high transformation invariance, and have activations that are highly correlated with part labels. Code and data are available at: https://github.com/hansen7/OcCo
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Type: Proceedings paper
Language: eng
Source: In: Proceedings of the IEEE International Conference on Computer Vision (pp. 9762-9772). Institute of Electrical and Electronics Engineers (IEEE), 2021
Format: text
Identifier: https://discovery.ucl.ac.uk/id/eprint/10155879/1/Wang_Unsupervised_Point_Cloud_Pre-Training_via_Occlusion_Completion_ICCV_2021_paper.pdf
Identifier: https://discovery.ucl.ac.uk/id/eprint/10155879/
Rights: open
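Step 1 of the described pipeline (masking all points occluded in a camera view) can be sketched with a simple z-buffer over a coarse x-y grid: a viewer looks down the z-axis, and within each grid cell only the nearest point is kept visible. This is an illustrative toy, not the paper's actual occlusion procedure; the function name, grid resolution, and camera model are assumptions.

```python
# Hypothetical sketch of view-based occlusion masking: a coarse x-y grid
# acts as a z-buffer for a camera looking along +z, so in each cell only
# the point nearest the camera (smallest z) is visible.
def occlusion_mask(points, resolution=0.25):
    """Split points into (visible, occluded) lists for a viewer at z = -inf."""
    zbuffer = {}  # grid cell -> index of the nearest point seen so far
    for i, (x, y, z) in enumerate(points):
        cell = (int(x // resolution), int(y // resolution))
        if cell not in zbuffer or z < points[zbuffer[cell]][2]:
            zbuffer[cell] = i
    visible_idx = set(zbuffer.values())
    visible = [p for i, p in enumerate(points) if i in visible_idx]
    occluded = [p for i, p in enumerate(points) if i not in visible_idx]
    return visible, occluded

# Two points share a cell: the nearer one stays visible, the farther is masked.
pts = [(0.1, 0.1, 0.0), (0.1, 0.1, 1.0), (0.6, 0.1, 0.5)]
vis, occ = occlusion_mask(pts)
```

In the pre-training setup the abstract outlines, the `occluded` set would be the reconstruction target for the encoder-decoder, while the `visible` set is its input.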