Supporting multimedia search has emerged as an important research topic. Research in this area spans a spectrum from the least automatic to the most automatic. At one end lies the purely manual labeling paradigm, which labels multimedia content, e.g., images and video clips, manually with text labels and then uses text search to retrieve the multimedia content indirectly. At the other end lies the content-based search paradigm, which can be fully automatic because it relies on low-level features extracted by multimedia analysis. In recent years, a third paradigm has emerged in the middle: the annotation paradigm. Once concept models are trained, this paradigm can automatically detect/annotate concepts in unseen multimedia content. This paper examines the annotation paradigm. Specifically, it argues that relationship-based annotation approaches outperform other existing annotation approaches within this paradigm, because individual concepts are considered jointly rather than independently. We use two examples to illustrate the argument: the first concerns image annotation and the second video annotation. Experiments indeed show that relationship-based annotation approaches deliver superior performance.
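The contrast between independent and joint concept handling can be made concrete with a minimal sketch. All concept names, scores, correlations, and weights below are illustrative assumptions, not values from the paper: an independent annotator thresholds each detector score on its own, while a relationship-based annotator first refines each score using the scores of correlated concepts.

```python
# Hypothetical detector scores for one shot, one per concept (illustrative only).
scores = {"beach": 0.55, "ocean": 0.40, "indoor": 0.45}

# Hypothetical pairwise concept correlations, as might be learned from training
# data: positive values mean the concepts tend to co-occur, negative the opposite.
correlation = {
    ("beach", "ocean"): 0.6,
    ("beach", "indoor"): -0.5,
    ("ocean", "indoor"): -0.5,
}

def corr(a, b):
    """Symmetric lookup of the pairwise correlation (0.0 if unknown)."""
    return correlation.get((a, b)) or correlation.get((b, a), 0.0)

def annotate_independent(scores, threshold=0.5):
    """Each concept is decided from its own score alone."""
    return {c for c, s in scores.items() if s >= threshold}

def annotate_joint(scores, threshold=0.5, weight=1.0):
    """Each concept's score is refined by evidence from correlated concepts."""
    refined = {}
    for c, s in scores.items():
        context = sum(corr(c, o) * scores[o] for o in scores if o != c)
        refined[c] = s + weight * context
    return {c for c, s in refined.items() if s >= threshold}

print(annotate_independent(scores))  # {'beach'}: 'ocean' is missed
print(annotate_joint(scores))        # co-occurrence with 'beach' pulls 'ocean' in
```

In this toy example, the independent annotator misses "ocean" (score 0.40) and the joint annotator recovers it because "ocean" co-occurs with the confidently detected "beach", while the negatively correlated "indoor" is suppressed. The linear score adjustment stands in for whatever joint model a relationship-based approach actually uses.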