Video annotation is a promising and essential step for content-based video search and retrieval. Most state-of-the-art video annotation approaches detect multiple semantic concepts in an isolated manner, neglecting the fact that video concepts are usually semantically correlated. In this paper, we propose to refine video annotation by leveraging the pairwise concurrent relations among video concepts. These concurrent relations are explicitly modeled by a concurrent matrix, and a propagation strategy is then adopted to refine the annotations. By iteratively spreading the scores of all related concepts to each other, the detection results converge to a stable and optimal state. In contrast with existing concept-fusion methods, the proposed approach is computationally more efficient and easier to implement, since it does not require constructing any contextual model. Furthermore, we show its intuitive connection with the PageRank algorithm. We conduct experiments on the TRECVID 2005 corpus and report superior performance compared to existing key approaches.
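The abstract does not spell out the update rule, but the described propagation can be illustrated with a minimal PageRank-style sketch: a column-normalized concurrent (co-occurrence) matrix spreads the per-concept detection scores among related concepts until they stabilize. The function name `refine_scores`, the damping factor `alpha`, and the convergence test below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def refine_scores(initial_scores, concurrent_matrix, alpha=0.85, tol=1e-6, max_iter=100):
    """Refine per-concept detection scores by propagation over a concurrent matrix.

    initial_scores    : (K,) array of raw detection scores for one video shot.
    concurrent_matrix : (K, K) array of pairwise concept co-occurrence counts.
    alpha             : damping factor balancing propagated vs. original scores.
    """
    # Column-normalize the concurrent matrix so each column sums to 1,
    # turning co-occurrence counts into transition probabilities between concepts.
    col_sums = concurrent_matrix.sum(axis=0, keepdims=True)
    P = concurrent_matrix / np.where(col_sums == 0, 1.0, col_sums)

    # Normalized initial scores serve as the restart distribution.
    s0 = initial_scores / initial_scores.sum()
    s = s0.copy()
    for _ in range(max_iter):
        # Spread each concept's score to its correlated concepts,
        # while keeping a (1 - alpha) pull toward the original detections.
        s_next = (1 - alpha) * s0 + alpha * (P @ s)
        if np.abs(s_next - s).sum() < tol:  # stop once the scores are stable
            return s_next
        s = s_next
    return s
```

Written this way, the refinement is a personalized PageRank over the concept graph in which the initial detection scores act as the restart distribution, which matches the intuitive connection to PageRank mentioned above.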