Video content representation on tiny devices.
Presented at: UNSPECIFIED.
The perceptual satisfaction of a user watching video on a tiny mobile device is constrained by display capability and network bandwidth. To maximize the user's perceptual satisfaction under these constraints, we propose a new method that adaptively represents video content in real time on tiny devices according to the user's attention. In our framework, first, a sampling-based dynamic attention model is proposed to detect and track the user's attention in the video stream. Second, based on the most-attended regions and sequences extracted, an attention-based representation is introduced to achieve higher perceptual satisfaction on a small display. User experiments show the effectiveness of the proposed method in a video surveillance application domain.
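The abstract does not specify the attention model's internals, but the core idea of cropping a frame to its most-attended region for a small display can be illustrated with a minimal sketch. The example below is an assumption-laden stand-in, not the paper's method: it uses frame differencing as a motion-saliency cue (a plausible choice for surveillance video) and an integral image to find the fixed-size crop window with the most motion energy.

```python
import numpy as np

def attention_crop(prev_frame, frame, crop_h, crop_w):
    """Return the (crop_h x crop_w) window of `frame` with the most motion
    energy, plus its top-left corner. A simplified stand-in for an
    attention-based representation; the real model is more elaborate."""
    # Motion saliency: absolute per-pixel frame difference.
    saliency = np.abs(frame.astype(float) - prev_frame.astype(float))
    # Zero-padded integral image gives O(1) window sums.
    pad = np.zeros((saliency.shape[0] + 1, saliency.shape[1] + 1))
    pad[1:, 1:] = saliency.cumsum(0).cumsum(1)
    best, best_yx = -1.0, (0, 0)
    for y in range(saliency.shape[0] - crop_h + 1):
        for x in range(saliency.shape[1] - crop_w + 1):
            s = (pad[y + crop_h, x + crop_w] - pad[y, x + crop_w]
                 - pad[y + crop_h, x] + pad[y, x])
            if s > best:
                best, best_yx = s, (y, x)
    y, x = best_yx
    return frame[y:y + crop_h, x:x + crop_w], best_yx
```

Transmitting only the attended crop (or encoding it at higher quality) is one way such a representation can trade full-frame context for legibility on a small screen.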
|Type:||Conference item (UNSPECIFIED)|
|Title:||Video content representation on tiny devices|
|UCL classification:||UCL > School of BEAMS > Faculty of Engineering Science|
||UCL > School of BEAMS > Faculty of Engineering Science > Computer Science|