Learning concepts by modeling relationships

  • Authors:
  • Yong Rui; Guo-Jun Qi

  • Affiliations:
  • Microsoft Corporation, Beijing, China; University of Science and Technology of China, Hefei, Anhui, China

  • Venue:
  • MCAM'07: Proceedings of the 2007 International Conference on Multimedia Content Analysis and Mining
  • Year:
  • 2007


Abstract

Supporting multimedia search has emerged as an important research topic. There are three paradigms on a research spectrum that ranges from the least automatic to the most automatic. On the far left end is the pure manual labeling paradigm, which labels multimedia content, e.g., images and video clips, manually with text labels and then uses text search to search multimedia content indirectly. On the far right end is the content-based search paradigm, which can be fully automatic by using low-level features from multimedia analysis. In recent years, a third paradigm has emerged in the middle: the annotation paradigm, in which concept models are first trained on labeled examples; once the concept models are trained, they can automatically detect/annotate concepts in unseen multimedia content. This paper focuses on the annotation paradigm. Specifically, it argues that within this paradigm, the relationship-based annotation approach outperforms other existing annotation approaches, because individual concepts are considered jointly instead of independently. We use two examples to illustrate the argument: the first on image annotation and the second on video annotation. Experiments indeed show that relationship-based annotation approaches deliver superior performance.
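
To make the joint-versus-independent distinction concrete, here is a minimal sketch, a toy illustration only and not the method proposed in the paper: independent per-concept detector scores are refined using pairwise concept co-occurrence statistics estimated from training labels, so that correlated concepts such as "beach" and "sky" reinforce each other. The concept names, the lift-based weighting, and the blending parameter alpha are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

concepts = ["sky", "beach", "indoor"]
n_train = 200

# Hypothetical training labels: "beach" nearly always implies "sky",
# while "indoor" never co-occurs with "sky" in this toy data.
sky = rng.random(n_train) < 0.6
beach = sky & (rng.random(n_train) < 0.5)
indoor = ~sky & (rng.random(n_train) < 0.8)
Y = np.stack([sky, beach, indoor], axis=1).astype(float)

# Pairwise statistics estimated from the training labels.
cooc = (Y.T @ Y) / n_train              # P(c_i, c_j)
prior = Y.mean(axis=0)                  # P(c_i)
lift = cooc / np.outer(prior, prior)    # >1 means positively correlated
np.fill_diagonal(lift, 0.0)             # ignore self-correlation

def refine(scores, alpha=0.3):
    """Blend each concept's independent detector score with the
    scores of correlated concepts, weighted by pairwise lift."""
    neighbor = (lift @ scores) / np.maximum(lift.sum(axis=1), 1e-9)
    return (1 - alpha) * scores + alpha * neighbor

# Independent detector scores for one test image: "beach" is
# uncertain on its own, but a confident "sky" pulls it up.
independent = np.array([0.9, 0.5, 0.1])
joint = refine(independent)

for name, s_ind, s_joint in zip(concepts, independent, joint):
    print(f"{name:7s} independent={s_ind:.2f} joint={s_joint:.2f}")
```

In this toy run the confident "sky" score pulls the uncertain "beach" score upward, which is exactly the kind of correction that independent per-concept detectors cannot make.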