Context dependent SVMs for interconnected image network annotation

  • Authors: Hichem Sahbi, Xi Li
  • Affiliations: CNRS TELECOM ParisTech, Paris, France (both authors)
  • Venue: Proceedings of the international conference on Multimedia
  • Year: 2010

Abstract

The exponential growth of interconnected networks such as Flickr has made them a standard way to share and explore data, where users post content and refer to one another. These interconnections carry valuable information that can enhance many information retrieval tasks, including ranking and annotation. In this paper we introduce a novel image annotation framework based on support vector machines (SVMs) and a new class of kernels referred to as context-dependent. The method goes beyond the naive use of intrinsic low-level features (such as color, texture, and shape) and context-free kernels in order to design a kernel function applicable to interconnected databases such as social networks. Our main contribution is a variational framework that builds this kernel from both the intrinsic features and the underlying contextual information; the resulting function converges to a positive definite fixed point, usable for SVM training and other kernel methods. When plugged into SVMs, our context-dependent kernel consistently improves image annotation performance over context-free kernels on hundreds of thousands of Flickr images.
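
To make the idea concrete, the sketch below illustrates the general pattern the abstract describes: start from a context-free kernel on low-level features, repeatedly mix in similarity propagated through the image network until a fixed point is (approximately) reached, and train an SVM on the resulting Gram matrix. This is a simplified stand-in, not the paper's actual variational formulation; the update rule, the `alpha` mixing weight, and the toy feature/adjacency data are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' exact method): an iterative, context-aware
# kernel for images linked in a network, plugged into an SVM via a precomputed
# Gram matrix.
import numpy as np
from sklearn.svm import SVC

def intrinsic_kernel(features, gamma=0.1):
    """Context-free RBF kernel on low-level features (color, texture, ...)."""
    sq = np.sum(features**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    return np.exp(-gamma * sq_dists)

def context_dependent_kernel(features, adjacency, alpha=0.2, n_iter=10):
    """Iterate K <- (1 - alpha) * K_intrinsic + alpha * A K A^T.

    Each term is positive semi-definite (a congruence A K A^T preserves PSD),
    so the iterate stays PSD, loosely mimicking the positive definite fixed
    point mentioned in the abstract. The true update in the paper is derived
    from a variational objective and differs from this simplified rule.
    """
    k_intrinsic = intrinsic_kernel(features)
    # Row-normalize the adjacency so each image averages its neighbors' context.
    a = adjacency / np.maximum(adjacency.sum(axis=1, keepdims=True), 1.0)
    k = k_intrinsic.copy()
    for _ in range(n_iter):
        k = (1.0 - alpha) * k_intrinsic + alpha * (a @ k @ a.T)
    return k

# Hypothetical toy data: 6 images, 5-D features, a small link graph, binary labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 5))
adjacency = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 0, 1, 0, 0],
                      [1, 0, 0, 0, 1, 0],
                      [0, 1, 0, 0, 0, 1],
                      [0, 0, 1, 0, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])

K = context_dependent_kernel(features, adjacency)
clf = SVC(kernel="precomputed").fit(K, labels)  # SVM trained on the context kernel
print(clf.predict(K))                           # predictions on the training images
```

Because the kernel is supplied as a precomputed Gram matrix, the same SVM machinery used with context-free kernels applies unchanged; only the matrix construction changes to incorporate the network links.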