MAUI: a multimodal affective user interface

  • Authors:
  • Christine L. Lisetti; Fatma Nasoz

  • Affiliations:
  • University of Central Florida, Orlando, FL; University of Central Florida, Orlando, FL

  • Venue:
  • Proceedings of the tenth ACM international conference on Multimedia
  • Year:
  • 2002

Abstract

Human intelligence is being increasingly redefined to include the all-encompassing effect of emotions upon what used to be considered 'pure reason'. With recent progress in research on computer vision, speech/prosody recognition, and bio-feedback, real-time recognition of affect will enhance human-computer interaction considerably, as well as assist further progress in the development of new emotion theories. In this article, we describe how affect, moods, and emotions closely interact with cognition, and how affect and emotion are the quintessential multimodal processes in humans. We then propose an adaptive system architecture designed to sense the user's emotional and affective states via three multimodal subsystems (V, K, A): namely (1) the Visual (from facial images and videos), (2) the Kinesthetic (from autonomic nervous system (ANS) signals), and (3) the Auditory (from speech). The results of the system sensing are then integrated into a multimodally perceived representation of the user's current emotional state. The multimodal anthropomorphic interface agent then adapts its interface by responding most appropriately to the current emotional state of its user, and provides intelligent multimodal feedback to the user.
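
As a rough illustration of the architecture described in the abstract, the following Python fragment shows one way the outputs of the three sensing subsystems (Visual, Kinesthetic, Auditory) could be fused into a single perceived emotional state that then drives the agent's adaptation. This is a minimal sketch under assumptions of our own, not the authors' implementation: the emotion labels, per-channel reliability weights, and adaptation policy are hypothetical.

    # Hypothetical late-fusion sketch of the (V, K, A) sensing architecture.
    from dataclasses import dataclass
    from typing import Dict

    # Assumed emotion vocabulary (not taken from the paper).
    EMOTIONS = ["neutral", "happiness", "sadness", "anger", "fear", "frustration"]

    @dataclass
    class ModalityReading:
        """Per-emotion confidence scores from one subsystem, plus an assumed
        reliability weight describing how much to trust that channel."""
        scores: Dict[str, float]
        reliability: float

    def fuse(visual: ModalityReading,
             kinesthetic: ModalityReading,
             auditory: ModalityReading) -> str:
        """Weighted late fusion of the V, K, A outputs into one perceived emotion."""
        combined = {e: 0.0 for e in EMOTIONS}
        for reading in (visual, kinesthetic, auditory):
            for emotion, score in reading.scores.items():
                combined[emotion] = combined.get(emotion, 0.0) + reading.reliability * score
        return max(combined, key=combined.get)

    def adapt_interface(perceived_emotion: str) -> str:
        """Placeholder adaptation policy mapping the perceived state to a response."""
        responses = {
            "frustration": "simplify the dialogue and offer help",
            "sadness": "use an empathetic tone and slower pacing",
            "happiness": "keep the current interaction style",
        }
        return responses.get(perceived_emotion, "continue with neutral feedback")

    # Example usage with made-up subsystem outputs:
    visual = ModalityReading({"happiness": 0.7, "neutral": 0.3}, reliability=0.5)
    kinesthetic = ModalityReading({"anger": 0.2, "happiness": 0.6}, reliability=0.3)
    auditory = ModalityReading({"happiness": 0.8}, reliability=0.2)
    state = fuse(visual, kinesthetic, auditory)          # -> "happiness"
    print(state, "->", adapt_interface(state))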