Contextual Kernel and Spectral Methods for Learning the Semantics of Images

  • Authors:
  • Zhiwu Lu; H. H. S. Ip; Yuxin Peng

  • Affiliations:
  • Inst. of Comput. Sci. & Technol., Peking Univ., Beijing, China

  • Venue:
  • IEEE Transactions on Image Processing
  • Year:
  • 2011

Abstract

This paper presents contextual kernel and spectral methods for learning the semantics of images, which allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity between images. Specifically, we represent each image as a 2-D sequence of visual words and measure the similarity between two 2-D sequences by decomposing each into two orthogonal 1-D sequences and counting the shared occurrences of s-length 1-D subsequences. Based on the proposed spatial string kernel, we further formulate automatic image annotation as a contextual keyword propagation problem, which can be solved very efficiently by linear programming. Unlike traditional relevance models that treat each keyword independently, the proposed contextual kernel method for keyword propagation takes into account the semantic context of annotation keywords and propagates multiple keywords simultaneously. Significantly, this type of semantic context can also be incorporated into spectral embedding to refine the annotations predicted by keyword propagation. Experiments on three standard image datasets demonstrate that our contextual kernel and spectral methods achieve significantly better results than the state of the art.
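
To make the kernel construction concrete, below is a minimal sketch of one plausible reading of the spatial string kernel. It assumes each image arrives as a rectangular grid of visual-word IDs, that the two orthogonal 1-D sequences are row-wise and column-wise scans, and that the s-length subsequences are contiguous; the exact decomposition, subsequence weighting, and normalization used in the paper are not given in the abstract, and all names below are illustrative.

```python
from collections import Counter

def subsequence_counts(seq, s):
    """Count all contiguous s-length subsequences (s-grams) of a 1-D sequence."""
    return Counter(tuple(seq[i:i + s]) for i in range(len(seq) - s + 1))

def string_kernel_1d(seq_a, seq_b, s):
    """1-D string kernel: shared occurrences of s-length subsequences."""
    grams_a = subsequence_counts(seq_a, s)
    grams_b = subsequence_counts(seq_b, s)
    return sum(count * grams_b[gram] for gram, count in grams_a.items() if gram in grams_b)

def spatial_string_kernel(grid_a, grid_b, s=2):
    """Spatial string kernel between two images given as 2-D grids of visual words.

    Each 2-D sequence is decomposed into two orthogonal 1-D sequences
    (assumed here to be row-wise and column-wise scans), and the two
    1-D kernel values are summed.
    """
    rows_a = [w for row in grid_a for w in row]        # row-wise scan
    cols_a = [w for col in zip(*grid_a) for w in col]  # column-wise scan
    rows_b = [w for row in grid_b for w in row]
    cols_b = [w for col in zip(*grid_b) for w in col]
    return string_kernel_1d(rows_a, rows_b, s) + string_kernel_1d(cols_a, cols_b, s)

# Toy usage: two 3x3 grids over a small visual-word vocabulary.
img_a = [[1, 2, 3], [4, 5, 6], [1, 2, 3]]
img_b = [[1, 2, 3], [7, 8, 9], [4, 5, 6]]
print(spatial_string_kernel(img_a, img_b, s=2))
```

In the pipeline the abstract describes, the pairwise kernel values computed this way would form the image similarity matrix that drives the subsequent linear-programming-based keyword propagation and the spectral refinement step.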