This paper provides a general introduction to the concept of Implicit Human-Centered Tagging (IHCT): the automatic extraction of tags from the nonverbal behavioral feedback of media users. The main idea behind IHCT is that the nonverbal behaviors displayed while interacting with multimedia data (e.g., facial expressions, head nods) carry information useful for improving the tag sets associated with that data. Because such behaviors are displayed naturally and spontaneously, no effort is required from the users; this is why the resulting tagging process is said to be "implicit". Tags obtained through IHCT are expected to be more robust than tags associated with the data explicitly, at least in terms of generality (they make sense to everybody) and statistical reliability (all tags are sufficiently represented). The paper discusses these issues in detail and provides an overview of pioneering efforts in the field.