Learning to gesture: applying appropriate animations to spoken text

  • Authors:
  • Nathan Nichols, Jiahui Liu, Bryan Pardo, Kristian Hammond, Larry Birnbaum

  • Affiliations:
  • Northwestern University, Evanston, IL (all authors)

  • Venue:
  • Proceedings of the 15th ACM International Conference on Multimedia
  • Year:
  • 2007

Abstract

We propose a machine learning system that learns to choose human gestures to accompany novel text. The system is trained on scripts composed of speech and animations that were hand-coded by professional animators and shipped in video games. We treat this as a text-classification problem, classifying speech as corresponding to specific classes of gestures. We have built and tested two separate classifiers. The first is trained simply on the frequencies of different animations in the corpus. The second extracts text features from each script and maps these features to the gestures that accompany the script. We have experimented with a number of features of the text, including n-grams, the emotional valence of the text, and parts of speech. Using a naïve Bayes classifier, the system learns to associate these features with appropriate classes of gestures. Once trained, the system can be given novel text to which it will attempt to assign appropriate gestures. We examine the performance of the two classifiers using n-fold cross-validation over our training data, as well as two user studies in which participants subjectively evaluated the results. Although there are many possible applications of automated gesture assignment, we hope to apply this technique to a system that produces an automated news show.
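
To make the text-classification framing concrete, below is a minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn and using only n-gram features (the paper also describes emotional-valence and part-of-speech features). The utterances and gesture labels are invented for illustration; they are not the paper's game-script corpus or its actual gesture classes.

```python
# A minimal sketch of a naive Bayes gesture classifier over n-gram text features,
# assuming scikit-learn. The utterances and gesture labels are invented toy data,
# not the paper's game-script corpus or gesture classes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy training data: lines of speech, each paired with a gesture class.
utterances = [
    "I can't believe you did that!",
    "That is absolutely outrageous!",
    "Follow me, the exit is this way.",
    "Come on, it's just through here.",
    "I'm so sorry for your loss.",
    "Forgive me, I didn't mean it.",
]
gestures = [
    "emphatic_point", "emphatic_point",
    "beckon", "beckon",
    "head_bow", "head_bow",
]

# Bag-of-n-grams features feeding a multinomial naive Bayes classifier.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram and bigram counts
    MultinomialNB(),
)

# n-fold cross-validation over the training data (n=2 here, given the tiny toy set).
scores = cross_val_score(model, utterances, gestures, cv=2)
print("cross-validation accuracy:", scores.mean())

# After training on the full corpus, the classifier assigns a gesture class to novel text.
model.fit(utterances, gestures)
print(model.predict(["Come with me, quickly!"]))
```

In this sketch the gesture classes play the role of categories in an ordinary text classifier; a production system would train on the hand-coded game scripts and add the richer feature extractors the abstract mentions.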