Automatic detection of learner's affect from conversational cues

  • Authors:
  • Sidney K. D'Mello;Scotty D. Craig;Amy Witherspoon;Bethany McDaniel;Arthur Graesser

  • Affiliations:
  • Department of Computer Science, The University of Memphis, Memphis, USA 38152;Learning Research and Development Center, University of Pittsburgh, Pittsburgh, USA 15260;Department of Psychology, The University of Memphis, Memphis, USA 38152;Department of Psychology, The University of Memphis, Memphis, USA 38152;Department of Psychology, The University of Memphis, Memphis, USA 38152

  • Venue:
  • User Modeling and User-Adapted Interaction
  • Year:
  • 2008


Abstract

We explored the reliability of detecting a learner's affect from conversational features extracted from interactions with AutoTutor, an intelligent tutoring system (ITS) that helps students learn by holding a conversation in natural language. Training data were collected in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Inter-rater reliability scores indicated that the classifications of the trained judges were more reliable than those of the novice judges. Seven data sets that temporally integrated the affective judgments with the dialogue features of each learner were constructed. The first four data sets corresponded to the judgments of the learner, a peer, and two trained judges, while the remaining three combined the judgments of two or more raters. Multiple regression analyses confirmed the hypothesis that dialogue features could significantly predict the affective states of boredom, confusion, flow, and frustration. Machine learning experiments indicated that standard classifiers were moderately successful in discriminating the affective states of boredom, confusion, flow, frustration, and neutral, yielding a peak accuracy of 42% with neutral included (chance = 20%) and 54% with neutral excluded (chance = 25%). Individual detections of boredom, confusion, flow, and frustration, each contrasted with neutral affect, had maximum accuracies of 69%, 68%, 71%, and 78%, respectively (chance = 50%). The classifiers that operated on the emotion judgments of the trained judges and on the combined models outperformed those based on the judgments of the novices (i.e., the self and peer). Follow-up classification analyses that assessed the degree to which machine-generated affect labels correlated with affect judgments provided by humans revealed that human-machine agreement was on par with that of the novice judges (self and peer) but quantitatively lower than that of the trained judges. We discuss the prospects of extending AutoTutor into an affect-sensing ITS.
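Chance-corrected agreement between pairs of raters, as in the inter-rater reliability analyses summarized above, is commonly measured with Cohen's kappa. The following is a minimal illustrative sketch in Python (the toy judge labels are hypothetical, not data from the study):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Toy example using the five affective states considered in the study.
judge1 = ["boredom", "confusion", "flow", "frustration", "neutral", "flow"]
judge2 = ["boredom", "confusion", "flow", "neutral", "neutral", "confusion"]
print(round(cohen_kappa(judge1, judge2), 3))  # → 0.586
```

Kappa is 1.0 for perfect agreement and near 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement when some labels (such as neutral) dominate.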