Combining classifiers in multimodal affect detection

  • Authors:
  • M. S. Hussain; Hamed Monkaresi; Rafael A. Calvo

  • Affiliations:
  • The University of Sydney, NSW, Australia (all authors)

  • Venue:
  • AusDM '12 Proceedings of the Tenth Australasian Data Mining Conference - Volume 134
  • Year:
  • 2012

Abstract

Affect detection, in which users' mental states are automatically recognized from facial expressions, speech, physiology, and other modalities, requires accurate machine learning and classification techniques. This paper investigates how combined classifiers, and their base classifiers, can be used for affect detection with features from facial video and multichannel physiology. The base classifiers evaluated include function-based, lazy, and decision-tree classifiers; the combined classifiers were implemented as vote classifiers. Results indicate that the accuracy of affect detection can be improved using the combined classifiers, especially by fusing the multimodal features. The base classifiers that are most useful for particular modalities have been identified. Vote classifiers also outperformed the base classifiers for most individuals.
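The paper itself does not include code; the sketch below is a minimal illustration of the vote-classifier idea described in the abstract, using scikit-learn. The three base estimators are chosen to mirror the three classifier categories the abstract names (function-based, lazy, decision tree), but the specific estimators, parameters, and synthetic feature data are illustrative assumptions, not the authors' actual setup.

```python
# Sketch of a vote classifier over three base classifiers, one per
# category named in the abstract. All concrete choices here (SVC, k-NN,
# tree depth, synthetic data) are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier  # "lazy" classifier
from sklearn.svm import SVC                         # "function" classifier
from sklearn.tree import DecisionTreeClassifier     # "decision tree"

# Stand-in for fused facial-video and multichannel-physiology features.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

base = [
    ("svm", SVC(kernel="rbf")),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("tree", DecisionTreeClassifier(max_depth=5)),
]

# Combined classifier: majority vote over the base classifiers' outputs.
vote = VotingClassifier(estimators=base, voting="hard")

# Compare each base classifier against the vote classifier.
for name, clf in base + [("vote", vote)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

With hard voting, the ensemble predicts the label chosen by the majority of base classifiers; this often improves on individual classifiers when their errors are not strongly correlated, which is the effect the abstract reports for multimodal feature fusion.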