Jones, G., Bielski, R., Julier, S. and Berthouze, N. (2010) Towards a situated, multimodal interface for multiple UAV control. In: Proceedings - IEEE International Conference on Robotics and Automation. (pp. 1739-1744).
Multiple autonomous Unmanned Aerial Vehicles (UAVs) can be used to complement human teams. This paper presents the results of an exploratory study investigating gesture/speech interfaces for situated interaction with robots, and the development of three iterations of a prototype command set. The command set was compiled by observing users interacting with a simulated interface in a virtual reality environment. We found that users consider this type of interface intuitive, and that their commands tend to group naturally into 'High-Level' and 'Low-Level' instructions. However, as the robots moved further away, the loss of depth perception and direct feedback was inimical to the interaction. In a second experiment we found that simple heads-up display elements could mitigate these issues. ©2010 IEEE.
Title: Towards a situated, multimodal interface for multiple UAV control
UCL classification:
- UCL > School of Life and Medical Sciences > Faculty of Brain Sciences > Psychology and Language Sciences (Division of) > UCL Interaction Centre
- UCL > School of BEAMS > Faculty of Engineering Science > Computer Science