Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions

  • Authors:
  • Khiet P. Truong; David A. van Leeuwen; Mark A. Neerincx

  • Affiliations:
  • TNO Human Factors, Dept. of Human Interfaces, Soesterberg, The Netherlands (all authors)

  • Venue:
  • FAC'07: Proceedings of the 3rd International Conference on Foundations of Augmented Cognition
  • Year:
  • 2007

Abstract

Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, we give an overview of emotion recognition studies based on a combination of speech and facial expressions. We then identify difficulties concerning data collection, data fusion, system evaluation, and emotion annotation that one is likely to encounter in emotion recognition research. Further, we identify possible applications for emotion recognition, such as health monitoring and e-learning systems. Finally, we discuss the growing need for agreed standards in automatic emotion recognition research.
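The abstract mentions data fusion of the two modalities. As a purely illustrative sketch (not the authors' actual method), decision-level ("late") fusion is one common approach: each modality produces per-emotion scores, which are then combined with a weighted sum. All names and weights below are hypothetical.

```python
# Hypothetical sketch of decision-level (late) fusion of emotion scores
# from a speech classifier and a facial-expression classifier.
# This is NOT the method described in the paper, only an illustration.

def fuse_scores(speech_scores, face_scores, w_speech=0.5):
    """Weighted-sum fusion of per-emotion scores from two modalities.

    Both inputs map emotion labels to scores in [0, 1]; w_speech is the
    (assumed) weight given to the speech modality.
    """
    assert set(speech_scores) == set(face_scores), "label sets must match"
    return {
        emotion: w_speech * speech_scores[emotion]
                 + (1.0 - w_speech) * face_scores[emotion]
        for emotion in speech_scores
    }

# Toy example scores (invented for illustration):
speech = {"happy": 0.6, "angry": 0.3, "neutral": 0.1}
face = {"happy": 0.4, "angry": 0.5, "neutral": 0.1}

fused = fuse_scores(speech, face, w_speech=0.5)
print(max(fused, key=fused.get))  # prints "happy" (0.5 vs. 0.4 for angry)
```

Feature-level ("early") fusion, where speech and facial features are concatenated before classification, is the usual alternative; the choice between the two is one of the data-fusion difficulties the paper surveys.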