Context-based support vector machines for interconnected image annotation

  • Authors:
  • Hichem Sahbi; Xi Li

  • Affiliations:
  • CNRS Telecom ParisTech, Paris, France; CNRS Telecom ParisTech, Paris, France, and School of Computer Science, The University of Adelaide, Australia, and NLPR, CASIA, Beijing, China

  • Venue:
  • ACCV'10 Proceedings of the 10th Asian conference on Computer vision - Volume Part I
  • Year:
  • 2010

Abstract

We introduce in this paper a novel image annotation approach based on support vector machines (SVMs) and a new class of kernels referred to as context-dependent. The method goes beyond the naive use of intrinsic low-level features (such as color, texture, and shape) and context-free kernels, in order to design a kernel function applicable to interconnected databases such as social networks. The main contributions of our method are (i) a variational approach that designs this kernel using both intrinsic features and the underlying contextual information resulting from different links, and (ii) a proof that our kernel converges to a positive definite fixed point, usable for SVM training and other kernel methods. When plugged into SVMs, our context-dependent kernel consistently improves image annotation performance over context-free kernels on hundreds of thousands of Flickr images.
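
The sketch below illustrates the general idea described in the abstract, not the paper's exact formulation: a context-free base kernel on low-level features is mixed with a contextual term propagated through the link structure, and the update is iterated to a fixed point before SVM training on the precomputed Gram matrix. The RBF base kernel, the mixing weights alpha and beta, and the specific update rule are assumptions made for illustration only.

```python
# Minimal sketch of a context-dependent kernel via fixed-point iteration.
# Assumptions (not from the paper): an RBF base kernel on intrinsic features,
# a symmetric link matrix A encoding the interconnections, and the update
# K <- alpha * K_base + beta * A @ K @ A.T iterated until convergence.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def context_dependent_kernel(X, A, alpha=0.8, beta=0.2, n_iter=20, tol=1e-6):
    """Mix an intrinsic kernel with context propagated over links A."""
    K_base = rbf_kernel(X)                            # context-free part
    A = A / max(np.abs(A).sum(axis=1).max(), 1e-12)   # scale links for stability
    K = K_base.copy()
    for _ in range(n_iter):
        K_next = alpha * K_base + beta * A @ K @ A.T  # contextual update
        if np.linalg.norm(K_next - K) < tol:          # fixed point reached
            K = K_next
            break
        K = K_next
    return 0.5 * (K + K.T)                            # enforce symmetry numerically

# Toy usage: 100 "images" with 16-dim features, sparse random symmetric links,
# binary labels; train an SVM on the precomputed context-dependent kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
A = (rng.random((100, 100)) < 0.05).astype(float)
A = np.maximum(A, A.T)
y = rng.integers(0, 2, size=100)

K = context_dependent_kernel(X, A)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```

Since each iterate is a nonnegative combination of positive semidefinite terms, the fixed point remains a valid kernel, which is what allows it to be passed to the SVM as a precomputed Gram matrix.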