Many images carry strong emotional semantics. In recent years, several studies have attempted to automatically identify the emotions that an image may induce in viewers, based on low-level image properties. Since such features can only capture the overall atmosphere of an image, they may fail when the emotional semantics are carried by objects. Additional information is therefore needed, and in this paper we propose to exploit textual information describing the image, such as tags. We have developed two textual features to capture the emotional meaning of text: one is based on a semantic distance matrix between the text and an emotional dictionary, and the other carries the valence and arousal meanings of words. Experiments were conducted on two datasets to evaluate the visual and textual features and their fusion. The results show that our textual features can improve the classification accuracy of affective images.
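The two textual features can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the tiny valence/arousal lexicon, the emotion dictionary, and the toy semantic-distance table are all hypothetical stand-ins (a real system would use a resource such as an ANEW-style word list and a corpus- or WordNet-based distance).

```python
# Hypothetical valence/arousal lexicon on 1-9 scales (ANEW-style).
# All values below are illustrative, not real norms.
VA_LEXICON = {
    "sunset": (7.7, 4.2),
    "war":    (2.0, 7.5),
    "beach":  (8.0, 5.5),
    "grave":  (2.4, 4.1),
}

# Hypothetical emotional dictionary (the paper's actual word list differs).
EMOTION_DICTIONARY = ["joy", "fear", "sadness", "anger"]

# Toy semantic distances between a tag and an emotion word;
# unlisted pairs fall back to the maximum distance of 1.0.
TOY_DISTANCE = {
    ("sunset", "joy"): 0.3,
    ("war", "fear"): 0.1,
    ("grave", "sadness"): 0.2,
}

def toy_distance(tag, emotion):
    """Stand-in for a real semantic distance measure."""
    return TOY_DISTANCE.get((tag, emotion), 1.0)

def valence_arousal_feature(tags):
    """Average valence and arousal over the tags covered by the lexicon."""
    hits = [VA_LEXICON[t] for t in tags if t in VA_LEXICON]
    if not hits:
        return (0.0, 0.0)  # neutral fallback when no tag is covered
    v = sum(h[0] for h in hits) / len(hits)
    a = sum(h[1] for h in hits) / len(hits)
    return (v, a)

def distance_feature(tags, distance=toy_distance):
    """One value per emotion word: the smallest distance from any tag
    to that word, i.e. a row-wise min over the tag/emotion matrix."""
    return [min(distance(t, e) for t in tags) for e in EMOTION_DICTIONARY]
```

For an image tagged `["sunset", "beach"]`, `valence_arousal_feature` yields a high-valence, mid-arousal pair, and `distance_feature` produces a four-dimensional vector that is small in the "joy" component; these vectors could then be concatenated with visual features before classification.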