Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set

  • Authors:
  • Shuicheng Yan

  • Affiliations:
  • ECE Department, National University of Singapore

  • Venue:
  • CVPR '12: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2012

Abstract

In this paper, we address the practical problem of cross-scenario clothing retrieval: given a daily human photo captured in a general environment, e.g., on the street, find similar clothing in online shops, where the photos are captured more professionally and against clean backgrounds. There are large discrepancies between the daily photo scenario and the online shopping scenario. We first propose to alleviate the human pose discrepancy by locating 30 human parts detected by a well-trained human detector. Then, based on the part features, we propose a two-step calculation to obtain more reliable one-to-many similarities between the query daily photo and the online shopping photos: 1) the within-scenario one-to-many similarities between a query daily photo and an auxiliary set are derived by direct sparse reconstruction; and 2) using a cross-scenario many-to-many similarity transfer matrix, inferred offline from an extra auxiliary set and the online shopping set, reliable cross-scenario one-to-many similarities between the query daily photo and all online shopping photos are obtained. We collect a large online shopping dataset and a daily photo dataset, both of which are thoroughly labeled with 15 clothing attributes via Mechanical Turk. Extensive experimental evaluations on the collected datasets demonstrate the effectiveness of the proposed framework for cross-scenario clothing retrieval.
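
The two-step similarity calculation can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensionality, the lasso solver standing in for the paper's sparse reconstruction, and the randomly generated transfer matrix are all placeholder assumptions; in the paper the transfer matrix is inferred offline from the auxiliary set and the online shopping set.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical sizes: feature dim, auxiliary set size, online shop set size.
d, n_aux, n_shop = 128, 200, 500
A = rng.standard_normal((d, n_aux))   # auxiliary-set part features (one column per photo)
M = rng.random((n_aux, n_shop))       # cross-scenario transfer matrix (placeholder;
                                      # inferred offline in the paper)
q = rng.standard_normal(d)            # part-feature vector of the query daily photo

# Step 1: within-scenario one-to-many similarities by sparsely
# reconstructing the query from the auxiliary set (lasso as a stand-in,
# with nonnegative coefficients so they can act as similarities).
lasso = Lasso(alpha=0.1, positive=True, max_iter=5000)
lasso.fit(A, q)                       # columns of A form the dictionary
w = lasso.coef_                       # w[i] ~ similarity(query, auxiliary photo i)

# Step 2: transfer to cross-scenario one-to-many similarities
# against all online shopping photos.
s = w @ M                             # s[j] ~ similarity(query, shop photo j)

top10 = np.argsort(s)[::-1][:10]      # indices of the most similar shop photos
print(top10)
```

The point of the two steps is that the query is only ever compared directly to photos from its own scenario; all cross-scenario comparison is delegated to the precomputed transfer matrix, which absorbs the discrepancy between street and shop imaging conditions.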