Towards IMACA: intelligent multimodal affective conversational agent

  • Authors:
  • Amir Hussain;Erik Cambria;Thomas Mazzocco;Marco Grassi;Qiu-Feng Wang;Tariq Durrani

  • Affiliations:
  • Dept. of Computing Science and Mathematics, University of Stirling, UK;Temasek Laboratories, National University of Singapore, Singapore;Dept. of Computing Science and Mathematics, University of Stirling, UK;Dept. of Information Engineering, Università Politecnica delle Marche, Italy;National Laboratory of Pattern Recognition, Chinese Academy of Sciences, P.R. China;Dept. of Electronic and Electrical Engineering, University of Strathclyde, UK

  • Venue:
  • ICONIP'12: Proceedings of the 19th International Conference on Neural Information Processing - Volume Part I
  • Year:
  • 2012

Abstract

A key aspect of achieving natural interaction in machines is multimodality. Besides verbal communication, humans also interact through many other channels, e.g., facial expressions, gestures, eye contact, posture, and voice tone. Such channels convey not only semantics but also emotional cues that are essential for interpreting the transmitted message. The importance of affective information, and the capability to manage it properly, has increasingly been recognized as fundamental to the development of a new generation of emotion-aware applications in scenarios such as e-learning, e-health, and human-computer interaction. To this end, this work investigates the adoption of different paradigms in the fields of text, vocal, and video analysis, in order to lay the basis for the development of an intelligent multimodal affective conversational agent.
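
The abstract does not specify how the text, vocal, and video channels are combined, so the sketch below is purely illustrative rather than the authors' method: a minimal decision-level (late) fusion of per-modality emotion scores, a common baseline for multimodal affect recognition. All names here (ModalityOutput, late_fusion, the emotion labels and weights) are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical emotion inventory; real systems may use a different label set.
EMOTIONS = ("joy", "sadness", "anger", "fear", "surprise", "disgust")

@dataclass
class ModalityOutput:
    scores: Dict[str, float]  # emotion -> confidence in [0, 1] from one modality
    weight: float             # reliability assigned to this modality

def late_fusion(outputs: List[ModalityOutput]) -> Dict[str, float]:
    """Weighted average of per-modality emotion scores (decision-level fusion)."""
    total_weight = sum(o.weight for o in outputs) or 1.0
    return {
        e: sum(o.weight * o.scores.get(e, 0.0) for o in outputs) / total_weight
        for e in EMOTIONS
    }

if __name__ == "__main__":
    # Assumed outputs from a text analyzer, a prosody-based vocal classifier,
    # and a facial-expression model; values are made up for illustration.
    text = ModalityOutput({"joy": 0.7, "surprise": 0.2}, weight=0.5)
    voice = ModalityOutput({"joy": 0.4, "anger": 0.1}, weight=0.3)
    video = ModalityOutput({"joy": 0.6, "surprise": 0.3}, weight=0.2)
    fused = late_fusion([text, voice, video])
    print(max(fused, key=fused.get))  # dominant fused emotion, e.g. "joy"
```

Feature-level (early) fusion or learned fusion weights are equally plausible designs; the weighted-average form above is chosen only because it keeps the example self-contained.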