Embedding spatial information into image content description for scene retrieval

  • Authors:
  • N. V. Hoàng, V. Gouet-Brunet, M. Rukoz, M. Manouvrier

  • Affiliations:
  • LAMSADE - University Paris-Dauphine, Pl. de Lattre de Tassigny, F75775 Paris Cedex 16, France and CEDRIC/CNAM - 292, rue Saint-Martin, F75141 Paris Cedex 03, France
  • CEDRIC/CNAM - 292, rue Saint-Martin, F75141 Paris Cedex 03, France
  • LAMSADE - University Paris-Dauphine, Pl. de Lattre de Tassigny, F75775 Paris Cedex 16, France and University Paris Ouest Nanterre La Défense, 200, av. République, F92001 Nanterre Cedex, ...
  • LAMSADE - University Paris-Dauphine, Pl. de Lattre de Tassigny, F75775 Paris Cedex 16, France

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010

Abstract

This article presents Δ-TSR, an image content representation that describes spatial layout through triangular relationships between visual entities, which can be symbolic objects or low-level visual features. A semi-local implementation of Δ-TSR is also proposed, making the description robust to viewpoint changes. We evaluate Δ-TSR for image retrieval under the query-by-example paradigm, on contents represented with interest points in a bag-of-features model: it improves on state-of-the-art techniques in both retrieval quality and execution time, and is scalable. Finally, its effectiveness is evaluated on a topical scenario dedicated to scene retrieval in datasets of city landmarks.
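To give an intuition for triangle-based spatial description, the sketch below is an illustrative simplification, not the paper's actual Δ-TSR descriptor: it enumerates triples of 2-D points (standing in for interest points), computes the interior angles of each triangle, which are invariant to translation, rotation, and uniform scaling, and accumulates them into a histogram. The function names and the histogram binning are assumptions made for this example only.

```python
import itertools
import math

def interior_angles(p, q, r):
    """Interior angles (radians) of the triangle formed by points p, q, r.

    Angles are invariant to translation, rotation, and uniform scaling,
    which is why triangle relationships are attractive for describing
    spatial layout robustly.
    """
    def angle_at(a, b, c):
        # Angle at vertex a, between the rays a->b and a->c.
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1 = math.hypot(*v1)
        n2 = math.hypot(*v2)
        # Clamp to guard against rounding slightly outside [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
    return (angle_at(p, q, r), angle_at(q, p, r), angle_at(r, p, q))

def triangle_histogram(points, n_bins=8):
    """Histogram of quantized interior angles over all point triples.

    A crude, order-free summary of spatial layout: two point sets that
    differ only by a similarity transform yield the same histogram.
    """
    hist = [0] * n_bins
    for p, q, r in itertools.combinations(points, 3):
        for ang in interior_angles(p, q, r):
            b = min(int(ang / math.pi * n_bins), n_bins - 1)
            hist[b] += 1
    return hist
```

For example, scaling every point by the same factor leaves `triangle_histogram` unchanged, whereas a bag-of-features histogram alone ignores the points' positions entirely; the semi-local variant described in the article restricts which entity triples are considered, to gain robustness to viewpoint changes.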