The increasing popularity of user-generated content (UGC) calls for effective annotation techniques that facilitate precise content search and retrieval. In this paper, we propose a new approach for the semantic annotation of personal video content that takes advantage of user-contributed tags available in an image folksonomy. Video shots and folksonomy images are first represented by semantic vectors. These semantic vectors are then used to measure the semantic similarity between each video shot and the folksonomy images, and the tags assigned to semantically similar folksonomy images are used to annotate the video shots. To verify the effectiveness of the proposed annotation method, experiments were performed with video sequences retrieved from YouTube and images downloaded from Flickr. Our experimental results demonstrate that the proposed method successfully annotates personal video content with user-contributed tags retrieved from an image folksonomy. In addition, our tag vocabulary is significantly larger than the vocabularies used by conventional annotation methods.
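The pipeline described above — represent each video shot and each folksonomy image as a semantic vector, rank images by similarity to the shot, and propagate the tags of the most similar images — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper name `annotate_shot`, the use of cosine similarity, and the top-k tag-propagation rule are assumptions made for the example.

```python
import numpy as np

def annotate_shot(shot_vec, image_vecs, image_tags, k=5):
    """Propagate tags from the k folksonomy images most similar to a shot.

    shot_vec   -- semantic vector of one video shot, shape (d,)
    image_vecs -- semantic vectors of the folksonomy images, shape (n, d)
    image_tags -- list of n tag lists, one per folksonomy image
    """
    # Cosine similarity between the shot and every folksonomy image
    # (similarity measure assumed here; the paper only states that
    # semantic vectors are compared for semantic similarity).
    sims = image_vecs @ shot_vec / (
        np.linalg.norm(image_vecs, axis=1) * np.linalg.norm(shot_vec) + 1e-12
    )
    # Indices of the k most similar images, best first
    top = np.argsort(sims)[::-1][:k]
    # Collect their tags, preserving rank order and removing duplicates
    tags = []
    for i in top:
        for t in image_tags[i]:
            if t not in tags:
                tags.append(t)
    return tags
```

For example, a shot whose semantic vector lies closest to beach-scene images would inherit tags such as "beach" and "sunset" from those images, while tags of dissimilar images (e.g. city scenes) are ignored.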