Girges, C; Spencer, J; O'Brien, J (2015) Categorizing identity from facial motion. Quarterly Journal of Experimental Psychology, 68(9), pp. 1832-1843. doi:10.1080/17470218.2014.993664.
Abstract
Advances in marker-less motion capture technology now allow the accurate replication of facial motion and deformation in computer-generated imagery (CGI). A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues. Animations were generated from motion captures acquired during natural speech, thus eliciting both rigid (head rotations and translations) and nonrigid (expressional changes) motion. To limit interferences from individual differences in facial form, all animations shared the same appearance. Observers were required to discriminate between different videos of facial motion and between the facial motions of different people. Performance was compared to the control condition of orientation-inverted facial motion. The results show that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion. A clear inversion effect in both tasks provided consistency with previous studies, supporting the configural view of human face perception. The accuracy of this motion capture technology thus allowed stimuli to be generated that closely resembled real moving faces. Future studies may wish to implement such methodology when studying human face perception.