Berisha, F., Johnston, A. and McOwan, P. (2007) Spatial location of critical facial motion information for PCA-based performance-driven mimicry. In: Journal of Vision. (pp. 495 - ?).
Full text not available from this repository.
Our ability to process faces is known to depend on the spatial location of the visual facial information we receive. The bubbles method is an effective way of revealing the diagnostic facial information used in different categorisation tasks. We applied it here to reveal diagnostic information for a performance-driven mimicry task carried out by a computer model of the face, built in part on biologically motivated principles. The face model was generated by vectorizing a sequence of images of a talking face, extracting motion fields with an optic flow algorithm and computing a set of basis actions using principal component analysis (PCA). The standard bubbles technique identified the regions around and including the mouth and eyes as the most important for our task. These regions overlapped with, but were not identical to, the areas of maximum pixel-value variance. Visual inspection also showed that the PCA face model recovers aspects of expression in areas occluded in the driver sequence. Until now, the bubbles technique has been used only with human observers to search for diagnostic facial features. Here, a system that uses reconstruction fidelity as its diagnostic criterion, and is indifferent to the content of the stimulus, mimics the behaviour of human observers in face discrimination tasks.
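The pipeline the abstract describes, building basis actions from motion fields with PCA and probing them with bubble-style apertures scored by reconstruction fidelity, can be sketched as follows. This is a minimal illustration under assumed dimensions and synthetic data, not the authors' implementation; the grid size, number of basis actions, bubble width, and fidelity score are all hypothetical choices.

```python
import numpy as np

# Illustrative sketch (not the paper's code): PCA "basis actions" built from
# per-frame motion fields, plus a bubbles-style probe that scores
# reconstruction fidelity when only part of the face is revealed.
rng = np.random.default_rng(0)
n_frames, h, w = 60, 16, 16                      # hypothetical motion-field grid
fields = rng.normal(size=(n_frames, h, w, 2))    # (dx, dy) vector per grid point

X = fields.reshape(n_frames, -1)                 # vectorize: one row per frame
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:12]                                  # top-k principal components

def gaussian_bubble(cy, cx, sigma=3.0):
    """2-D Gaussian aperture revealing the region around (cy, cx)."""
    yy, xx = np.mgrid[:h, :w]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

def fidelity(frame_vec, mask2d):
    """Reconstruction fidelity when the driver field is masked by a bubble."""
    mask = np.repeat(mask2d[..., None], 2, axis=-1).ravel()
    masked = mean + mask * (frame_vec - mean)    # occlude outside the bubble
    recon = mean + (basis @ (masked - mean)) @ basis
    err = np.linalg.norm(frame_vec - recon)
    return 1.0 / (1.0 + err)                     # higher = better mimicry

# Probing many bubble centres and aggregating the scores ranks facial
# regions by how much revealing them helps the model mimic the driver.
scores = {(cy, cx): fidelity(X[0], gaussian_bubble(cy, cx))
          for cy in range(0, h, 4) for cx in range(0, w, 4)}
```

The key design point mirrored from the abstract is that the diagnostic criterion is purely reconstruction fidelity: the probe is indifferent to stimulus content, so diagnostic regions emerge from the model's own basis rather than from human judgement.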
|Title:||Spatial location of critical facial motion information for PCA-based performance-driven mimicry|
|Keywords:||faces; face animation; principal component analysis; bubbles|
|UCL classification:||UCL > School of Life and Medical Sciences > Faculty of Brain Sciences > Psychology and Language Sciences (Division of) > Cognitive, Perceptual and Brain Sciences; UCL > School of BEAMS > Faculty of Maths and Physical Sciences > CoMPLEX - Maths and Physics in the Life Sciences and Experimental Biology|