The Description Logic Handbook: Theory, Implementation, and Applications
A Semantic Web Primer
Emotional face expression profiles supported by virtual human ontology. Computer Animation and Virtual Worlds (CASA 2006)
Semantic Multimedia and Ontologies: Theory and Applications
Towards semantic multimodal video annotation. Proceedings of the Third COST 2102 International Training School on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues
Towards an RDF encoding of ConceptNet. ISNN'11: Proceedings of the 8th International Conference on Advances in Neural Networks, Part III
EmotiNet: a knowledge base for emotion detection in text built on the appraisal theories. NLDB'11: Proceedings of the 16th International Conference on Natural Language Processing and Information Systems
Sentic Computing for social media marketing. Multimedia Tools and Applications
Ontology-based semantic affective tagging. ISNN'12: Proceedings of the 9th International Conference on Advances in Neural Networks, Part I
Sentimantics: conceptual spaces for lexical sentiment polarity representation with contextuality. WASSA '12: Proceedings of the 3rd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis
Towards IMACA: intelligent multimodal affective conversational agent. ICONIP'12: Proceedings of the 19th International Conference on Neural Information Processing, Part I
Editorial: Detecting implicit expressions of affect in text using EmotiNet and its extensions. Data & Knowledge Engineering
A major challenge in annotating multimedia recordings of dialogue, together with the associated gestures and emotional states, lies in the great variety of intrinsically heterogeneous metadata and in the impossibility of standardizing the descriptors used, particularly for the subject's emotional state. We propose to tackle this problem with the tools and vision offered by the Semantic Web, by developing an ontology of human emotions that can be used to annotate emotion in multimedia data. Such an ontology supplies a structure that provides both flexibility and interoperability, allowing the encoded annotations to be shared effectively among different users.
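To make the idea of ontology-backed emotion annotation concrete, the following is a minimal sketch of how such an annotation could be encoded as RDF-style subject-predicate-object triples. The namespaces and the class/property names (`emo:Anger`, `emo:expressedBy`, `emo:hasEmotion`, `emo:confidence`) are hypothetical illustrations, not taken from the ontology described here or from any published vocabulary.

```python
# Sketch: encode "a person in a media clip expresses an emotion" as RDF-style
# triples. All URIs, classes, and properties below are hypothetical examples.

EMO = "http://example.org/emotion-ontology#"   # hypothetical ontology namespace
MEDIA = "http://example.org/media/"            # hypothetical media namespace

def annotate_emotion(clip_id, subject_id, emotion, confidence):
    """Return triples stating that a subject shown in a clip expresses
    the given emotion, with an attached confidence score."""
    annotation = f"{MEDIA}{clip_id}/annotation/{subject_id}-{emotion}"
    return [
        (annotation, f"{EMO}annotates", f"{MEDIA}{clip_id}"),
        (annotation, f"{EMO}expressedBy", f"{MEDIA}person/{subject_id}"),
        (annotation, f"{EMO}hasEmotion", f"{EMO}{emotion}"),
        (annotation, f"{EMO}confidence", str(confidence)),
    ]

# Example: annotate speaker1 in clip42 as expressing Anger.
for s, p, o in annotate_emotion("clip42", "speaker1", "Anger", 0.8):
    print(s, p, o)
```

Because every annotation is reduced to triples over a shared vocabulary, annotations produced by different users can be merged and queried uniformly, which is the interoperability benefit the abstract argues for.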