The Stanford Mobile Visual Search Data Set

  • Authors:
  • Vijay R. Chandrasekhar; David M. Chen; Sam S. Tsai; Ngai-Man Cheung; Huizhong Chen; Gabriel Takacs; Yuriy Reznik; Ramakrishna Vedantham; Radek Grzeszczuk; Jeff Bach; Bernd Girod

  • Affiliations:
  • Stanford University, Stanford, CA, USA; Stanford University, Stanford, CA, USA; Stanford University, Stanford, CA, USA; Stanford University, Stanford, CA, USA; Stanford University, Stanford, CA, USA; Stanford University, Stanford, CA, USA; Qualcomm, Inc., San Diego, CA, USA; Nokia Research Center, Palo Alto, CA, USA; Nokia Research Center, Palo Alto, CA, USA; Navteq, Chicago, IL, USA; Stanford University, Stanford, CA, USA

  • Venue:
  • MMSys '11: Proceedings of the Second Annual ACM Conference on Multimedia Systems
  • Year:
  • 2011

Abstract

We survey popular data sets used in the computer vision literature and point out their limitations for mobile visual search applications. To overcome many of these limitations, we propose the Stanford Mobile Visual Search data set. The data set contains camera-phone images of products, CDs, books, outdoor landmarks, business cards, text documents, museum paintings and video clips. It has several key characteristics lacking in existing data sets: rigid objects, widely varying lighting conditions, perspective distortion, foreground and background clutter, realistic ground-truth reference data, and query data collected from heterogeneous low- and high-end camera phones. We hope that the data set will help push research forward in the field of mobile visual search.
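A typical use of a data set like this is to pair camera-phone query images with their clean ground-truth reference images for retrieval evaluation. The sketch below illustrates one way such a pairing loop might look; the directory layout, category names, and file-naming convention are assumptions made for this example and are not defined by the paper.

```python
# Hypothetical iteration sketch for a mobile-visual-search data set.
# The folder structure and matching-by-filename convention below are
# assumptions for illustration only; consult the actual data set
# documentation for its real layout.
from pathlib import Path

# Object categories named in the abstract.
CATEGORIES = [
    "products", "cd_covers", "book_covers", "landmarks",
    "business_cards", "text_documents", "museum_paintings", "video_frames",
]

def pair_queries_with_references(root: Path):
    """Yield (category, query image, reference image) triples.

    Assumes each category folder holds a 'queries' directory of
    camera-phone shots and a 'references' directory of ground-truth
    images whose file names match the queries -- a convention chosen
    for this sketch, not one defined by the data set.
    """
    for category in CATEGORIES:
        query_dir = root / category / "queries"
        ref_dir = root / category / "references"
        if not query_dir.is_dir() or not ref_dir.is_dir():
            continue  # skip categories absent from this copy of the data
        for query in sorted(query_dir.glob("*.jpg")):
            reference = ref_dir / query.name
            if reference.exists():
                yield category, query, reference

if __name__ == "__main__":
    # "smvs" is a placeholder root directory for this example.
    for category, query, reference in pair_queries_with_references(Path("smvs")):
        print(f"{category}: {query.name} -> {reference.name}")
```

A retrieval experiment would then extract features from each query image and score them against the reference images, using the pairing above as ground truth.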