Associating textual features with visual ones to improve affective image classification

  • Authors:
  • Ningning Liu; Emmanuel Dellandréa; Bruno Tellez; Liming Chen

  • Affiliations:
  • Université de Lyon, CNRS, Ecole Centrale de Lyon, LIRIS, UMR5205, France (N. Liu, E. Dellandréa, L. Chen); Université de Lyon, CNRS, Université Lyon 1, LIRIS, UMR5205, France (B. Tellez)

  • Venue:
  • ACII'11: Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction - Volume Part I
  • Year:
  • 2011

Abstract

Many images carry strong emotional semantics. In recent years, several studies have attempted to automatically identify the emotions that images may induce in viewers, based on low-level image properties. Since such features capture only the overall atmosphere of an image, they may fail when the emotional semantics are carried by objects. Additional information is therefore needed, and in this paper we propose to exploit textual information describing the image, such as tags. To this end, we have developed two textual features that capture the emotional meaning of the text: one is based on a semantic distance matrix between the text and an emotional dictionary, and the other encodes the valence and arousal meanings of words. Experiments have been conducted on two datasets to evaluate the visual and textual features and their fusion. The results show that our textual features can improve the classification accuracy of affective images.
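To make the second textual feature concrete, below is a minimal Python sketch of a valence-arousal feature computed from image tags. It assumes an ANEW-style affective lexicon mapping words to (valence, arousal) scores on a 1-9 scale; the tiny lexicon entries and the mean-pooling aggregation here are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a valence-arousal textual feature for image tags.
# AFFECTIVE_LEXICON is a stand-in for an ANEW-style dictionary;
# its entries are hypothetical examples on the usual 1-9 scale.

AFFECTIVE_LEXICON = {
    # word: (valence, arousal)
    "sunset": (7.7, 4.2),
    "war":    (2.0, 7.5),
    "baby":   (8.2, 5.5),
    "spider": (3.3, 6.0),
}

def valence_arousal_feature(tags):
    """Average the valence and arousal of the tags covered by the lexicon.

    Returns a 2-dimensional feature (mean valence, mean arousal), or the
    neutral midpoint (5.0, 5.0) when no tag is found in the lexicon.
    """
    scores = [AFFECTIVE_LEXICON[t.lower()] for t in tags
              if t.lower() in AFFECTIVE_LEXICON]
    if not scores:
        return (5.0, 5.0)  # neutral fallback for uncovered tag sets
    n = len(scores)
    return (sum(v for v, _ in scores) / n,
            sum(a for _, a in scores) / n)

if __name__ == "__main__":
    print(valence_arousal_feature(["sunset", "beach"]))  # -> (7.7, 4.2)
    print(valence_arousal_feature(["war", "spider"]))    # -> (2.65, 6.75)
```

The resulting 2-dimensional feature can then be concatenated with, or fused at decision level with, the low-level visual features described in the paper.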