Making sense of student use of nonverbal cues for intelligent tutoring systems

  • Authors:
  • Farhad Dadgostar; Hokyoung Ryu; Abdolhossein Sarrafzadeh; Scott Overmyer

  • Affiliations:
  • Massey University, Albany, Auckland, New Zealand (Dadgostar, Ryu, Sarrafzadeh); South Dakota State University (Overmyer)

  • Venue:
  • OZCHI '05 Proceedings of the 17th Australia conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
  • Year:
  • 2005


Abstract

Many software systems would perform significantly better if they could interpret the nonverbal cues in their users' interactions, as humans normally do. Currently, Intelligent Tutoring Systems (ITSs), like other software systems, are unable to use nonverbal cues to interpret students' responses to instructional material the way human tutors can. We believe that this capability is essential for adapting teaching strategy to the needs of the learner. We performed an experiment to identify the kinds of gestures students use in a human-to-human learning context. We identified a range of gestures used in one-to-one tutoring environments and found that gesture use depends on students' skill level. Based on these findings, we suggest how the student model in an ITS should reflect this dependency. These results are applicable to HCI in general.