Emotion rating from short blog texts

  • Authors:
  • Alastair J. Gill; Darren Gergle; Robert M. French; Jon Oberlander

  • Affiliations:
  • Northwestern University, Evanston, IL, USA; Northwestern University, Evanston, IL, USA; University of Burgundy, Dijon, France; University of Edinburgh, Edinburgh, United Kingdom

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2008

Abstract

Being able to automatically perceive a variety of emotions from text alone has potentially important applications in CMC and HCI, ranging from identifying mood in online posts to enabling dynamically adaptive interfaces. However, this ability has not been clearly demonstrated in either human raters or computational systems. Here we examine the ability of naive raters of emotion to detect one of eight emotional categories from 50- and 200-word samples of real blog text. With expert raters as a 'gold standard', naive-expert rater agreement increased with longer texts, and was high for ratings of joy, disgust, anger and anticipation, but low for acceptance and 'neutral' texts. We discuss these findings in light of theories of CMC and potential applications in HCI.
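
The abstract reports naive-expert rater agreement broken down by emotion category and text length. As a rough illustration only (the paper's exact agreement statistic is not given in the abstract, and the labels and function below are hypothetical), a per-category percentage-agreement computation might look like this minimal Python sketch:

```python
from collections import defaultdict

def agreement_by_category(expert_labels, naive_labels):
    """Fraction of texts where a naive rater's label matches the expert
    ('gold standard') label, grouped by the expert's category.

    expert_labels, naive_labels: parallel lists of category strings,
    one pair per rated text.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for expert, naive in zip(expert_labels, naive_labels):
        totals[expert] += 1
        hits[expert] += int(naive == expert)
    return {cat: hits[cat] / totals[cat] for cat in totals}

# Toy usage with made-up labels (not data from the paper):
expert = ["joy", "joy", "anger", "acceptance", "neutral"]
naive  = ["joy", "joy", "anger", "sadness",    "anger"]
print(agreement_by_category(expert, naive))
# {'joy': 1.0, 'anger': 1.0, 'acceptance': 0.0, 'neutral': 0.0}
```

Such a per-category breakdown would surface the pattern described in the abstract, with high agreement for categories like joy or anger and low agreement for acceptance and 'neutral' texts; the study itself may well use a chance-corrected statistic instead of raw percentage agreement.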