Mind and Body: Dialogue and Posture for Affect Detection in Learning Environments

  • Authors:
  • Sidney D'Mello; Arthur Graesser

  • Affiliations:
  • Institute for Intelligent Systems, The University of Memphis, 365 Innovation Drive, Memphis, TN 38152, USA (both authors)

  • Venue:
  • Proceedings of the 2007 conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work
  • Year:
  • 2007

Abstract

We investigated the potential of automatically detecting a learner's affective states from posture patterns and dialogue features obtained during an interaction with AutoTutor, an intelligent tutoring system with conversational dialogue. Training and validation data were collected from the sensors in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Machine learning experiments with several standard classifiers indicated that the dialogue and posture features could individually discriminate between the affective states of boredom, confusion, flow (engagement), and frustration. Our results also indicated that combining the dialogue and posture features improved classification accuracy. However, the incremental gains associated with combining the two sensors were not sufficient to exhibit superadditivity (i.e., performance superior to an additive combination of the individual channels). Instead, the combination of posture and dialogue reflected a modest amount of redundancy between these channels.
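
The superadditivity comparison described in the abstract can be made concrete with a small sketch. The following Python snippet is an illustrative assumption, not the authors' actual pipeline: it uses scikit-learn, a random forest classifier, and synthetic placeholder features standing in for the dialogue and posture channels. It scores each channel separately and combined, then checks the combined accuracy against an additive bound built from the individual channels' gains over chance.

    # Hedged sketch of a two-channel affect classification experiment.
    # Feature names, data shapes, and the classifier are assumptions;
    # the paper's actual features and classifiers are not reproduced here.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder data: n observations with dialogue features (e.g.,
    # response latency, answer quality) and posture features (e.g.,
    # seat-pressure statistics). Labels are the four affective states.
    n = 200
    X_dialogue = rng.normal(size=(n, 6))
    X_posture = rng.normal(size=(n, 4))
    y = rng.integers(0, 4, size=n)  # boredom, confusion, flow, frustration

    def channel_accuracy(X, y):
        """Mean 5-fold cross-validated accuracy for one feature channel."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, X, y, cv=5).mean()

    acc_dialogue = channel_accuracy(X_dialogue, y)
    acc_posture = channel_accuracy(X_posture, y)
    acc_combined = channel_accuracy(np.hstack([X_dialogue, X_posture]), y)

    # Superadditivity check: does the combined channel exceed the sum of
    # the individual channels' improvements over the chance baseline?
    # If combined accuracy falls below this bound, the channels carry
    # redundant rather than complementary information.
    baseline = 0.25  # four balanced classes
    additive = baseline + (acc_dialogue - baseline) + (acc_posture - baseline)
    print(f"dialogue={acc_dialogue:.3f} posture={acc_posture:.3f} "
          f"combined={acc_combined:.3f} additive bound={additive:.3f}")

Under this reading, the abstract's finding corresponds to the combined accuracy landing above each individual channel but below the additive bound, i.e., modest redundancy between posture and dialogue.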