Contextual image annotation via projection and quantum theory inspired measurement for integration of text and visual features

  • Authors:
  • Leszek Kaliciak, Jun Wang, Dawei Song, Peng Zhang (The Robert Gordon University, Aberdeen, UK); Yuexian Hou (Tianjin University, Tianjin, China)

  • Venue:
  • QI'11 Proceedings of the 5th international conference on Quantum interaction
  • Year:
  • 2011


Abstract

Multimedia information retrieval suffers from the semantic gap: the difference between human perception and machine representation of images. To narrow this gap, a quantum-theory-inspired framework for integrating text and visual features has previously been proposed; this article is a follow-up to that model. Previously, two relatively straightforward statistical approaches were employed to associate the dimensions of the two feature spaces, with unsatisfactory results. In this paper, we address the problem of unannotated images by projecting them onto subspaces representing visual context and by incorporating a quantum-like measurement. The proposed principled approach extends the classical vector space model (VSM) and integrates seamlessly with the tensor-based framework. We evaluate the novel association methods in a small-scale experiment.
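The core operation the abstract describes, projecting an image's feature vector onto a subspace that represents visual context and reading off a quantum-like measurement, can be sketched as follows. This is a minimal illustration of the general mechanism (an orthogonal projector plus a Born-rule probability), not the paper's exact construction; the feature dimensions and context basis below are invented for the example.

```python
import numpy as np

def projector(basis):
    """Orthogonal projector onto the span of the basis vectors (rows)."""
    # Orthonormalize the basis with QR, then build P = Q Q^T.
    q, _ = np.linalg.qr(basis.T)
    return q @ q.T

def measurement_probability(x, P):
    """Born-rule style probability that state x lies in the subspace of P."""
    x = x / np.linalg.norm(x)            # normalize to a unit "state" vector
    return float(np.linalg.norm(P @ x) ** 2)

# Toy setting: 4-D visual feature space, 2-D "visual context" subspace.
basis = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
P = projector(basis)

x = np.array([1.0, 1.0, 1.0, 1.0])       # an unannotated image's features
print(measurement_probability(x, P))     # 0.5: half the "mass" falls in-context
```

Under this reading, an unannotated image would be associated with the textual context whose subspace yields the highest measurement probability.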