A Multi-View Embedding Space for Modeling Internet Images, Tags, and Their Semantics

  • Authors:
  • Yunchao Gong, Qifa Ke, Michael Isard, Svetlana Lazebnik

  • Affiliations:
  • Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, USA; Microsoft Research Silicon Valley, Mountain View, USA; Microsoft Research Silicon Valley, Mountain View, USA; Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, USA

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2014

Abstract

This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
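
Below is a minimal two-view sketch of the pipeline the abstract outlines, not the authors' implementation: random Fourier features (Rahimi and Recht) stand in for the explicit nonlinear kernel mappings that approximate kernel CCA, linear CCA is then solved on the mapped features, and retrieval uses a correlation-weighted cosine similarity in place of Euclidean distance. The third (semantic) view, the choice of feature map, the weighting power `p`, and all function names here are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_components=128, gamma=0.5, seed=0):
    """Explicit nonlinear map approximating an RBF kernel
    (random Fourier features) -- one scalable stand-in for the
    kernel mappings the abstract mentions; details are assumed."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_components))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

def cca(X, Y, d=16, reg=1e-3):
    """Plain two-view linear CCA via the standard eigenproblem
    Cxx^{-1} Cxy Cyy^{-1} Cyx w = rho^2 w, with ridge regularization."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:d]
    Wx = vecs[:, order].real
    rho = np.sqrt(np.clip(vals.real[order], 0.0, 1.0))  # canonical correlations
    Wy = np.linalg.solve(Cyy, Cxy.T) @ Wx / np.maximum(rho, 1e-12)
    return Wx, Wy, rho

def embed_sim(u, v, rho, p=4):
    """Hypothetical reconstruction of a 'specially designed' similarity:
    cosine similarity with each embedding dimension scaled by rho_d^p,
    so directions with stronger cross-view correlation count more."""
    uw, vw = u * rho**p, v * rho**p
    return float(uw @ vw) / (np.linalg.norm(uw) * np.linalg.norm(vw) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Z = rng.normal(size=(500, 5))                # shared latent "semantics"
    Xv = np.tanh(Z @ rng.normal(size=(5, 20)))   # synthetic "visual" view
    Xt = np.tanh(Z @ rng.normal(size=(5, 30)))   # synthetic "tag" view
    Pv = random_fourier_features(Xv, seed=2)
    Pt = random_fourier_features(Xt, seed=3)
    Wv, Wt, rho = cca(Pv, Pt)
    Ev = (Pv - Pv.mean(axis=0)) @ Wv             # images in the shared space
    Et = (Pt - Pt.mean(axis=0)) @ Wt             # tags in the shared space
    print("top canonical correlations:", np.round(rho[:5], 3))
    print("matched-pair similarity:  ", round(embed_sim(Ev[0], Et[0], rho), 3))
    print("random-pair similarity:   ", round(embed_sim(Ev[0], Et[1], rho), 3))
```

In this sketch the matched image/tag pair should score noticeably higher than a random pair. Extending it to the paper's three-view setting would mean adding a third (semantic) matrix and solving the multi-view generalization of the eigenproblem above.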