Robust visual reranking via sparsity and ranking constraints

  • Authors:
  • Nobuyuki Morioka; Jingdong Wang

  • Affiliations:
  • The University of New South Wales & NICTA, Sydney, Australia; Microsoft Research Asia, Beijing, China

  • Venue:
  • MM '11 Proceedings of the 19th ACM international conference on Multimedia
  • Year:
  • 2011


Abstract

Visual reranking has become a widely accepted method for improving traditional text-based image search engines. Its basic principle is that visually similar images should receive similar ranking scores. While existing methods differ in their specifics, almost all of them rely on explicit or implicit pseudo-relevance feedback (PRF). Explicit PRF-based approaches, including classification-based and clustering-based reranking, suffer from the difficulty of selecting reliable positive and negative samples. Implicit PRF-based approaches, such as graph-based and Bayesian visual reranking, handle this unreliability by using the initial ranking in a soft manner, but have limited ability to promote relevant images and demote irrelevant ones. In this paper, we propose an l1 squared-loss optimization with sparsity and ranking constraints to detect confident samples, i.e., those most likely to be relevant to a query. Based on the discovered confident samples, we present an adaptive kernel-based scheme to rerank the images. The success of our method rests on another important observation: irrelevant images, whether initially positioned at the top or bottom of the ranking, are usually less popular and more diverse than relevant images. The method is therefore robust against outlier images and well suited to queries whose relevant images are multi-modally distributed. Experimental results demonstrate significant improvement over several existing reranking approaches on both the MSRA-MM V1.0 and Web Queries datasets.
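To make the confident-sample idea concrete, here is a minimal sketch, assuming a lasso-style formulation: the initial text-based scores y are reconstructed from a visual-similarity kernel K under an l1 penalty, so that the nonzero coefficients pick out a sparse set of mutually similar, highly-ranked (i.e., confident) samples, and the fitted kernel scores then rerank all images. The RBF kernel, the plain ISTA solver, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(x, t):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_confident_samples(K, y, lam=0.1, n_iter=500):
    """Solve min_w ||y - K w||^2 + lam * ||w||_1 by ISTA.

    Nonzero entries of w mark samples treated as 'confident':
    they jointly explain the initial scores y through the
    visual-similarity kernel K.
    """
    n = K.shape[0]
    w = np.zeros(n)
    # Step size from the Lipschitz constant of the quadratic part.
    L = np.linalg.norm(K, 2) ** 2
    eta = 1.0 / L
    for _ in range(n_iter):
        grad = K.T @ (K @ w - y)          # gradient of the squared loss
        w = soft_threshold(w - eta * grad, eta * lam)  # l1 proximal step
    return w

# Toy usage: 6 images in a 2-D feature space, 3 forming a tight
# "relevant" cluster, with noisy initial text-based scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
X[:3] += 3.0  # tight cluster of visually consistent images
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)                      # RBF visual-similarity kernel
y = np.array([1.0, 0.9, 0.8, 0.3, 0.2, 0.1])  # initial text-based scores
w = sparse_confident_samples(K, y, lam=0.5)
scores = K @ w                             # kernel-based reranking scores
```

Because the l1 penalty zeroes out coefficients of isolated or weakly supported samples, outliers among the top-ranked images contribute little to the reranking scores, which mirrors the robustness argument in the abstract.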