TY - JOUR
ID - discovery10137040
VL - 1
AV - public
A1 - Watson, T
A1 - Halse, J
A1 - Dula, G
A1 - Soni, N
A1 - Wu, Y
A1 - Yasin, I
IS - 11
UR - https://doi.org/10.1121/10.0007151
Y1 - 2021/11/11/
N2 - There is much interest in anthropometric-derived head-related transfer functions (HRTFs) for simulating audio for virtual-reality systems. Three-dimensional (3D) anthropometric measures can be obtained directly from individuals or simulated indirectly from two-dimensional (2D) pinna images; the latter often requires additional pinna, head, and/or torso measures. This study investigated the accuracy with which 3D depth information can be obtained solely from 2D pinna images using an unsupervised monocular-depth estimation neural-network model. The output was compared with depth information obtained from corresponding magnetic resonance imaging (MRI) head scans (ground truth). Results show that 3D depth estimates obtained from 2D pinna images corresponded closely with MRI head-scan depth values.
TI - Correspondence between 3D ear depth information derived from 2D images and MRI: Use of a neural-network model
JF - Journal of the Acoustical Society of America Express Letters
N1 - © 2021 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
ER -