Short communication: Towards a universal detector by mining concepts with small semantic gaps

  • Authors:
  • Congyan Lang; Jiashi Feng; Yantao Zheng

  • Affiliations:
  • Department of Computer Science and Engineering, Beijing Jiaotong University, Beijing, China; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; Institute for Infocomm Research, Singapore

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2012

Abstract

Can we build a universal detector that visually recognizes unseen objects for which no training exemplars are available? Such a detector is highly desirable, as human vocabulary contains hundreds of thousands of object concepts but few of them have labeled image examples. In this study, we attempt to build such a universal detector that predicts concepts in the absence of training data. First, by considering both semantic relatedness and visual variance, we mine a set of realistic small-semantic-gap (SSG) concepts from a large-scale image corpus, ImageNet, which comprises 4961 concepts and nearly 4 million images. The discovered SSG concepts can be modeled well visually, and their detectors deliver reasonably satisfactory recognition accuracy. Building on these distinctive visual models, we then leverage semantic ontology knowledge and concept co-occurrence statistics to extend visual recognition to unseen concepts. The rationale is that object concepts generally co-occur in real-life images: their visual co-occurrence and semantic ontology allow concept recognition to transcend visual learning from image exemplars, and therefore enable the detector to predict unseen realistic concepts without training samples. To the best of our knowledge, this work is the first attempt to measure the semantic gap of such a large number of concepts and to leverage visually learnable concepts to predict those with no training images available. Experiments on the NUS-WIDE dataset demonstrate that the selected small-semantic-gap concepts can be modeled well and that the prediction of unseen concepts delivers promising results, with accuracy comparable to preliminary training-based methods.
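
The abstract outlines two stages: mining SSG concepts by trading off visual variance against semantic relatedness, and scoring unseen concepts as co-occurrence-weighted combinations of SSG detector outputs. The Python sketch below illustrates that pipeline; the scoring formula, the weighting scheme, and all names and toy data are assumptions made for illustration, not the authors' actual method.

```python
# A minimal sketch of the two stages described in the abstract.
# Everything here (formulas, weights, toy data) is a hypothetical
# illustration, not the paper's actual algorithm.

def semantic_gap_score(visual_variance, semantic_relatedness, alpha=0.5):
    # Smaller is better: low intra-concept visual variance and high
    # semantic relatedness both indicate a small semantic gap.
    return alpha * visual_variance - (1.0 - alpha) * semantic_relatedness

def mine_ssg_concepts(variance, relatedness, k=2):
    # Rank concepts by semantic-gap score and keep the k smallest.
    scores = {c: semantic_gap_score(variance[c], relatedness[c]) for c in variance}
    return sorted(scores, key=scores.get)[:k]

def predict_unseen(detector_scores, cooccur, unseen):
    # Zero-shot score for an unseen concept: a co-occurrence-weighted
    # average of the detector outputs of the mined SSG concepts.
    weights = {c: cooccur.get((c, unseen), 0.0) for c in detector_scores}
    total = sum(weights.values())
    if total == 0.0:
        return 0.0
    return sum(weights[c] * detector_scores[c] for c in detector_scores) / total

# Toy data: per-concept visual variance and semantic relatedness.
variance    = {"sunset": 0.2, "beach": 0.3, "freedom": 0.9}
relatedness = {"sunset": 0.8, "beach": 0.7, "freedom": 0.2}
ssg = mine_ssg_concepts(variance, relatedness)   # -> ["sunset", "beach"]

# Detector outputs of the SSG concepts on one test image, plus
# co-occurrence statistics linking them to the unseen concept "vacation".
detector_scores = {"sunset": 0.9, "beach": 0.6}
cooccur = {("sunset", "vacation"): 0.5, ("beach", "vacation"): 0.7}
print(predict_unseen(detector_scores, cooccur, "vacation"))  # 0.725
```

In this sketch the zero-shot score is simply a normalized weighted average; the paper's actual formulation may combine ontology-based relatedness and co-occurrence statistics in a different way.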