UCL Discovery

Affective state level recognition in naturalistic facial and vocal expressions

Meng, H; Bianchi-Berthouze, N; (2013) Affective state level recognition in naturalistic facial and vocal expressions. IEEE Transactions on Cybernetics, 44(3), 315-328. 10.1109/TCYB.2013.2253768. Green open access

PDF: 06507321.pdf, Download (1MB)
Abstract

Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of an expression represent the same affective content. In this paper, we exploit this relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (the AVEC 2011 audio and video dataset and the PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information-theoretic approach to show that the transitions occur very slowly and hence suggest modeling them as first-order Markov models. The dimension levels are treated as the hidden states in a Hidden Markov Model (HMM) framework, and their discrete transition and emission matrices are trained using the labels provided with the training set. Recognition is then converted into a best-path-finding problem: obtaining the best hidden-state sequence in the HMM. This is a key difference from the previous use of HMMs as classifiers. The modeling of transitions between dimension levels is integrated into a multistage approach, where the first stage maps the affective expression features to a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results for each of the unimodal datasets show overall performance significantly above that of a standard classification system that does not take temporal relationships into account. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition.
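
The abstract describes training discrete transition and emission matrices from the training labels and converting recognition into a best-path search over the hidden dimension levels. The sketch below is a rough, hypothetical illustration of that idea rather than the authors' implementation: it estimates the matrices from labeled training sequences and applies Viterbi decoding to refine the quantized outputs of a first-stage classifier. All function names, variable names, and the eps smoothing are assumptions made for this example.

```python
import numpy as np

def estimate_hmm_matrices(label_seqs, pred_seqs, n_levels, eps=1e-6):
    """Estimate discrete HMM parameters from training data.

    label_seqs: list of 1-D int arrays of ground-truth dimension levels (hidden states)
    pred_seqs:  list of 1-D int arrays of quantized first-stage outputs (observations)
    """
    A = np.full((n_levels, n_levels), eps)   # transition counts between levels
    B = np.full((n_levels, n_levels), eps)   # emission counts: true level -> predicted level
    pi = np.full(n_levels, eps)              # initial level counts
    for y, o in zip(label_seqs, pred_seqs):
        pi[y[0]] += 1
        for t in range(1, len(y)):
            A[y[t - 1], y[t]] += 1
        for s, obs in zip(y, o):
            B[s, obs] += 1
    A /= A.sum(axis=1, keepdims=True)        # normalize rows to probabilities
    B /= B.sum(axis=1, keepdims=True)
    pi /= pi.sum()
    return pi, A, B

def viterbi(obs, pi, A, B):
    """Return the most likely hidden level sequence for one test recording."""
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))          # best log-probability ending in each state
    psi = np.zeros((T, n_states), dtype=int) # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # (previous state, current state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path
```

In this framing the first-stage predictions play the role of the observation sequence, and the Viterbi path over the dimension levels enforces the slow, first-order level transitions that the information-theoretic analysis in the paper motivates.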

Type: Article
Title: Affective state level recognition in naturalistic facial and vocal expressions
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/TCYB.2013.2253768
Publisher version: http://dx.doi.org/10.1109/TCYB.2013.2253768
Language: English
Additional information: © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Keywords: Emotion recognition, Affective dimensions, Automatic emotion recognition, Facial expressions, Vocal expressions, HMM
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > UCL Interaction Centre
URI: https://discovery.ucl.ac.uk/id/eprint/1376761
