Ranking feature sets for emotion models used in classroom based intelligent tutoring systems

  • Authors:
  • David G. Cooper; Kasia Muldner; Ivon Arroyo; Beverly Park Woolf; Winslow Burleson

  • Affiliations:
  • Department of Computer Science, University of Massachusetts, Amherst, MA; School of Computing and Informatics, Arizona State University, Tempe, AZ; Department of Computer Science, University of Massachusetts, Amherst, MA; Department of Computer Science, University of Massachusetts, Amherst, MA; School of Computing and Informatics, Arizona State University, Tempe, AZ

  • Venue:
  • UMAP'10 Proceedings of the 18th international conference on User Modeling, Adaptation, and Personalization
  • Year:
  • 2010


Abstract

Recent progress has been made by using sensors with Intelligent Tutoring Systems in classrooms in order to predict the affective state of student users. If tutors can interpret sensor data for new students based on past experience, rather than having to be trained individually for each student, then tutor developers will be able to evaluate various methods of adapting to each student's affective state using consistent predictions. In the past, our classifiers have predicted student emotions with an accuracy between 78% and 87%. However, it is still unclear which sensors are best, and the educational technology community needs to know this in order to develop classifiers that outperform baselines, e.g., ones that use only the frequency of emotional occurrence to predict affective state. This paper proposes a method for clarifying classifier rankings for the purpose of building affective models. The method begins with a careful collection of a training set and a testing set, each drawn from a separate population, and concludes with a non-parametric ranking of the trained classifiers on the testing set. We illustrate this method with classifiers trained on data collected in the Fall of 2008 and tested on data collected in the Spring of 2009. Our results show that the classifiers for some affective states are significantly better than the baseline model; a validation analysis showed that some, but not all, classifier rankings generalize to new settings. Overall, our analysis suggests that although simple linear classifiers provide some benefit, more advanced methods or better features may be needed for better classification performance.
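
The sketch below illustrates one way to realize the evaluation protocol described in the abstract: train classifiers on one population, score them per student on a separate held-out population, and compare them to a frequency-only baseline with a non-parametric test. The synthetic data, feature construction, and the choice of logistic regression and the Wilcoxon signed-rank test are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of a train-on-one-population, test-on-another evaluation with
# a non-parametric comparison against a frequency-only baseline.
# Data, features, and models here are stand-ins, not the paper's pipeline.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_population(n_students, n_obs, n_features):
    """Synthetic sensor features and binary emotion labels, keyed by student."""
    data = {}
    for s in range(n_students):
        X = rng.normal(size=(n_obs, n_features))
        y = (X[:, 0] + 0.5 * rng.normal(size=n_obs) > 0).astype(int)
        data[s] = (X, y)
    return data

# Two separate populations, e.g. Fall 2008 for training, Spring 2009 for testing.
train_pop = make_population(n_students=20, n_obs=50, n_features=4)
test_pop = make_population(n_students=15, n_obs=50, n_features=4)

# Fit each classifier once on the pooled training population.
X_train = np.vstack([X for X, _ in train_pop.values()])
y_train = np.concatenate([y for _, y in train_pop.values()])
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score each classifier per student in the held-out population.
base_acc, lin_acc = [], []
for X, y in test_pop.values():
    base_acc.append(accuracy_score(y, baseline.predict(X)))
    lin_acc.append(accuracy_score(y, linear.predict(X)))

# Non-parametric comparison: Wilcoxon signed-rank test on the paired
# per-student accuracies (one possible "non-parametric ranking").
stat, p = wilcoxon(lin_acc, base_acc)
print(f"baseline mean accuracy: {np.mean(base_acc):.3f}")
print(f"linear   mean accuracy: {np.mean(lin_acc):.3f}")
print(f"Wilcoxon signed-rank p-value: {p:.4f}")
```

With more than two classifiers (e.g., one per sensor feature set), the same per-student scores could instead feed a Friedman test followed by post-hoc pairwise comparisons, which is a common non-parametric way to rank multiple classifiers over matched test units.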