Vicept: link visual features to concepts for large-scale image understanding

  • Authors:
  • Zhipeng Wu; Shuqiang Jiang; Liang Li; Peng Cui; Qingming Huang; Wen Gao

  • Affiliations:
  • Graduate University, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; Graduate University, Chinese Academy of Sciences, Beijing, China; Institute for Digital Media, Peking University, Beijing, China

  • Venue:
  • Proceedings of the International Conference on Multimedia
  • Year:
  • 2010

Abstract

Noticing the paradox of visual polysemia and concept polymorphism, this paper proposes a new perspective called "Vicept" to associate elementary visual features with cognitive concepts. First, a carefully prepared large-scale image dataset with associated concepts is established. Second, we extract local interest points as the elementary visual features, cluster them into visual words, and use Fuzzy Concept Membership Updating (FCMU) to build the link between the codebook and concept membership distributions; this bottommost feature is called the "Vicept word". Then, global-level Vicept features are established to correlate concepts with (partial) images. Finally, we validate our Vicept approach and show its effectiveness in the concept detection task. Our approach is independent of case-specific training data and can thus be extended to web-scale scenarios.
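
The abstract's pipeline (quantize local features into a codebook, attach a concept membership distribution to each visual word, then aggregate to an image-level representation) can be illustrated with a minimal sketch. The sketch below is only a simplified, frequency-based stand-in: it does not implement the paper's FCMU update or the actual Vicept construction, and all function names, array shapes, and the smoothing constant are assumptions introduced for illustration.

```python
# Illustrative sketch only: a simplified, frequency-based way to attach concept
# membership distributions to a visual-word codebook and to aggregate them into
# an image-level concept distribution. This is NOT the paper's FCMU procedure;
# names and shapes below are assumptions made for the example.
import numpy as np

def word_concept_memberships(word_ids_per_image, concept_ids_per_image,
                             num_words, num_concepts, smoothing=1e-6):
    """Estimate P(concept | visual word) from concept-labeled training images.

    word_ids_per_image:    list of 1-D int arrays, codebook indices of the local
                           features quantized from each image.
    concept_ids_per_image: list of lists of concept indices annotated per image.
    Returns a (num_words, num_concepts) row-stochastic matrix.
    """
    counts = np.full((num_words, num_concepts), smoothing)
    for words, concepts in zip(word_ids_per_image, concept_ids_per_image):
        for w in words:
            # Each occurrence of visual word w votes for every concept
            # annotated on the image it came from.
            counts[w, concepts] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def image_concept_distribution(word_ids, memberships):
    """Aggregate the membership distributions of an image's visual words into a
    single concept distribution (a crude image-level, Vicept-like feature)."""
    if len(word_ids) == 0:
        return np.full(memberships.shape[1], 1.0 / memberships.shape[1])
    dist = memberships[word_ids].mean(axis=0)
    return dist / dist.sum()

if __name__ == "__main__":
    # Toy data: 3 images, a 5-word codebook, 2 concepts.
    words = [np.array([0, 1, 1, 4]), np.array([2, 3]), np.array([0, 4, 4])]
    labels = [[0], [1], [0, 1]]
    M = word_concept_memberships(words, labels, num_words=5, num_concepts=2)
    print(image_concept_distribution(np.array([1, 4]), M))
```

A representation of this kind can score an unseen image against any concept in the vocabulary without training a per-concept classifier, which is the sense in which the abstract's claim of independence from case-specific training data can be read.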