Tagging video contents with positive/negative interest based on user's facial expression

  • Authors:
  • Masanori Miyahara (Graduate School of Engineering, Kobe University, Kobe, Hyogo, Japan)
  • Masaki Aoki (Graduate School of Engineering, Kobe University, Kobe, Hyogo, Japan)
  • Tetsuya Takiguchi (Organization of Advanced Science and Technology, Kobe University, Kobe, Hyogo, Japan)
  • Yasuo Ariki (Organization of Advanced Science and Technology, Kobe University, Kobe, Hyogo, Japan)

  • Venue:
  • MMM'08 Proceedings of the 14th international conference on Advances in multimedia modeling
  • Year:
  • 2008


Abstract

With the vast number of videos now available, it has become difficult for viewers to choose what to watch. To address this problem, we propose a system that tags video content with the viewer's facial expression and can be used for content-based recommendation. The viewer's face, captured by a camera, is extracted using Elastic Bunch Graph Matching, and the facial expression is recognized by Support Vector Machines. Expressions are classified as Neutral, Positive, Negative, or Rejective, and the recognition results are recorded as "facial expression tags" synchronized with the video content. Experimental results achieved an average recall rate of 87.61% and an average precision rate of 88.03%.
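
The sketch below illustrates the classification-and-tagging step outlined in the abstract: a multi-class SVM assigns one of the four expression labels to per-frame facial feature vectors and records the results alongside timestamps. It is a minimal, hedged example, not the authors' implementation; the feature representation, kernel choice, and all function names are assumptions standing in for the paper's Elastic Bunch Graph Matching features.

```python
# Minimal sketch, assuming facial feature vectors (e.g. Gabor-jet responses
# from Elastic Bunch Graph Matching) have already been extracted per frame.
# Names and parameters are illustrative, not taken from the paper.
import numpy as np
from sklearn.svm import SVC

CLASSES = ["Neutral", "Positive", "Negative", "Rejective"]

def train_expression_classifier(features, labels):
    """Train a multi-class SVM on per-frame facial feature vectors."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel choice is an assumption
    clf.fit(features, labels)
    return clf

def tag_video(clf, frame_features, timestamps):
    """Return (timestamp, expression) tags synchronized with the video."""
    predictions = clf.predict(frame_features)
    return [(t, CLASSES[p]) for t, p in zip(timestamps, predictions)]

if __name__ == "__main__":
    # Toy data standing in for real feature vectors (purely illustrative).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 40))
    y_train = rng.integers(0, len(CLASSES), size=200)
    clf = train_expression_classifier(X_train, y_train)

    X_frames = rng.normal(size=(5, 40))
    times = [0.0, 0.5, 1.0, 1.5, 2.0]
    for t, label in tag_video(clf, X_frames, times):
        print(f"{t:.1f}s -> {label}")
```

In this toy usage, each frame's predicted label is paired with its playback time, which mirrors the paper's idea of storing "facial expression tags" in synchronization with the video.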