Kokkinara, E.; Oyekoya, O.; Steed, A. (2011) Modelling selective visual attention for autonomous virtual characters. Computer Animation and Virtual Worlds, 22 (4), 361-369. doi:10.1002/cav.425
Autonomous virtual characters (AVCs) are becoming more prevalent, both for real-time interaction and as digital actors in film and TV production. AVCs require believable virtual human animation accompanied by natural attention generation, so the software that controls an AVC needs to model when and how to attend to the objects and other characters in the virtual environment. This paper models automatic attention behaviour using a saliency model that generates plausible targets for combined gaze and head motions. The model was compared with the default behaviour of the Second Life (SL) system in an object-observation scenario, and with real actors' behaviour in a conversation scenario. Results from a study run within the SL system demonstrate a promising attention model that is not only believable and realistic but also adaptable to varying tasks, without any prior knowledge of the virtual scene. Copyright (C) 2011 John Wiley & Sons, Ltd.
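As an illustration of the kind of mechanism the abstract describes, the sketch below picks a gaze/head target for a character by saliency-weighted sampling over visible objects. The object tuples, the field-of-view cutoff, and the centre-bias weighting are all assumptions for illustration, not the paper's actual saliency model.

```python
import math
import random

def select_gaze_target(objects, fov_deg=120.0, rng=None):
    """Saliency-weighted choice of a gaze target (illustrative sketch).

    `objects`: list of (name, saliency, bearing_deg) tuples, where
    saliency is in [0, 1] and bearing_deg is the angle between the
    object and the character's facing direction. The weighting
    scheme here is a hypothetical stand-in for the paper's model.
    """
    rng = rng or random.Random(0)
    weighted = []
    for name, saliency, bearing in objects:
        if abs(bearing) > fov_deg / 2:
            continue  # object is outside the field of view
        # Mild centre bias: objects nearer the line of sight are favoured.
        centre_bias = math.cos(math.radians(bearing))
        weighted.append((name, saliency * centre_bias))
    if not weighted:
        return None  # nothing visible to attend to
    names, weights = zip(*weighted)
    return rng.choices(names, weights=weights, k=1)[0]
```

A real system would recompute such a choice periodically, mixing in task-driven targets (e.g. the current conversation partner) alongside bottom-up saliency.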
Title: Modelling selective visual attention for autonomous virtual characters
Keywords: animation, attention, autonomous virtual characters, EYE, ANIMATION, BEHAVIORS
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science