%X We present an automated technique for computing a map between two genus-zero shapes that matches semantically corresponding regions to one another. The lack of annotated data prohibits direct inference of 3D semantic priors; instead, current state-of-the-art methods predominantly optimize geometric properties or require varying amounts of manual annotation. To overcome the lack of annotated training data, we distill semantic matches from pre-trained vision models: our method renders the pair of untextured 3D shapes from multiple viewpoints; the resulting renders are then fed into an off-the-shelf image-matching strategy that leverages a pre-trained visual model to produce feature points. This yields semantic correspondences, which are projected back onto the 3D shapes, producing a raw matching that is inaccurate and inconsistent across different viewpoints. These correspondences are refined and distilled into an inter-surface map by a dedicated optimization scheme that promotes bijectivity and continuity of the output map. We demonstrate that our approach can generate semantic surface-to-surface maps without requiring manual annotations or any 3D training data. Furthermore, it proves effective in scenarios of high semantic complexity, where objects are non-isometrically related, as well as in situations where they are nearly isometric.
%O This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
%I Wiley
%J Computer Graphics Forum
%K CCS Concepts, Computing methodologies, Shape analysis, Mesh geometry models, Feature selection
%L discovery10191492
%D 2024
%T Neural Semantic Surface Maps
%V 43
%A Luca Morreale
%A Noam Aigerman
%A Vladimir G Kim
%A Niloy J Mitra
%N 2
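
As a rough illustration of the pipeline the abstract summarizes, the sketch below outlines the distillation loop in Python. Every helper name (sample_viewpoints, render_with_pixel_map, match_features, refine_map) is a hypothetical placeholder standing in for a renderer, a pre-trained image matcher, and the paper's optimization; it is not the authors' implementation.

```python
# Hypothetical sketch of the distillation loop described in the abstract:
# render both untextured shapes from many views, match the renders with
# a pre-trained vision model, lift the 2D matches to the surfaces, then
# optimize them into a bijective, continuous inter-surface map.

def distill_semantic_map(mesh_a, mesh_b, num_views=32):
    raw_matches = []
    for view in sample_viewpoints(num_views):              # hypothetical helper
        # Render each untextured shape; keep a pixel -> surface-point map
        # so 2D matches can be projected back onto the 3D geometry.
        img_a, pix2surf_a = render_with_pixel_map(mesh_a, view)
        img_b, pix2surf_b = render_with_pixel_map(mesh_b, view)
        # Off-the-shelf image matching on the renders yields semantic
        # 2D correspondences (noisy and inconsistent across views).
        for pa, pb in match_features(img_a, img_b):        # hypothetical helper
            raw_matches.append((pix2surf_a[pa], pix2surf_b[pb]))
    # A dedicated optimization distills the raw matches into an
    # inter-surface map, promoting bijectivity and continuity.
    return refine_map(mesh_a, mesh_b, raw_matches)         # hypothetical helper
```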