Collaterally cued labelling framework underpinning semantic-level visual content descriptor

  • Authors:
  • Meng Zhu; Atta Badii

  • Affiliations:
  • IMSS Research Centre, Department of Computer Science, School of Systems Engineering, University of Reading, UK (both authors)

  • Venue:
  • VISUAL'07: Proceedings of the 9th International Conference on Advances in Visual Information Systems
  • Year:
  • 2007

Abstract

In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The work can be viewed as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines collateral knowledge extracted from the texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used to evaluate the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
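The core idea of the CCL framework described above can be illustrated with a toy sketch: score each candidate keyword for an image region by combining a low-level visual similarity term with a cue from the collateral text, then pick the best-scoring keyword. The function names, the prototype-based visual model, and the linear weighting `alpha` are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
# Minimal sketch of collaterally cued labelling (CCL), under assumed details:
# each keyword has a visual prototype vector, and the collateral text supplies
# a set of candidate keywords for the whole image.

def ccl_score(region_features, keyword, visual_prototypes, collateral_keywords,
              alpha=0.6):
    """Combine a visual match score with a collateral-text cue (hypothetical)."""
    proto = visual_prototypes[keyword]
    # Visual term: similarity derived from Euclidean distance to the prototype.
    dist = sum((a - b) ** 2 for a, b in zip(region_features, proto)) ** 0.5
    visual = 1.0 / (1.0 + dist)
    # Collateral term: 1 if the keyword occurs in the image's accompanying text.
    textual = 1.0 if keyword in collateral_keywords else 0.0
    return alpha * visual + (1.0 - alpha) * textual

def label_region(region_features, visual_prototypes, collateral_keywords):
    """Assign the keyword with the highest combined CCL score to the region."""
    return max(visual_prototypes,
               key=lambda k: ccl_score(region_features, k,
                                       visual_prototypes, collateral_keywords))

# Example: a bright, bluish region whose collateral text mentions "sky"
# is labelled "sky" rather than "grass".
prototypes = {"sky": [0.1, 0.9], "grass": [0.8, 0.2]}
label = label_region([0.15, 0.85], prototypes, {"sky"})
```

In this sketch the collateral text acts as a prior that disambiguates visually similar keywords, which reflects the role the abstract attributes to collateral knowledge in the CCL pipeline.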