Domain transfer for person re-identification

  • Authors:
  • Ryan Layne, Timothy M. Hospedales, Shaogang Gong

  • Affiliations:
  • Queen Mary University of London, London, United Kingdom (all authors)

  • Venue:
  • Proceedings of the 4th ACM/IEEE international workshop on Analysis and retrieval of tracked events and motion in imagery stream
  • Year:
  • 2013

Abstract

Automatic person re-identification is a crucial capability underpinning many applications in public space video surveillance. It is challenging due to intra-class variation in person appearance when observed in different views, together with limited inter-class variability. Various recent approaches have made great progress in re-identification performance using discriminative learning techniques. However, these approaches are fundamentally limited by the requirement of extensive annotated training data for every pair of views. For practical re-identification, this is an unreasonable assumption, as annotating extensive volumes of data for every pair of cameras to be re-identified may be impossible or prohibitively expensive. In this paper we move toward relaxing this strong assumption by investigating flexible multi-source transfer of re-identification models across camera pairs. Specifically, we show how to leverage prior re-identification models learned for a set of source view pairs (domains), and flexibly combine these to obtain good re-identification performance in a target view pair (domain) with greatly reduced training data requirements in the target domain.
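The abstract's idea of combining prior source-domain models for a new camera pair can be illustrated with a minimal sketch. The paper's actual method is not specified here, so the following is only one plausible instantiation: each source model is treated as a distance function between person images, each source is weighted by how well it ranks a handful of labelled probe-gallery pairs from the target camera pair, and the fused model is the weighted sum of source distances. All function names (`rank_of_true_match`, `fit_source_weights`, `combine_sources`) are hypothetical, not from the paper.

```python
import numpy as np

def rank_of_true_match(dist_fn, probe, gallery, true_idx):
    """Rank of the correct gallery match under a distance function (0 = best)."""
    dists = np.array([dist_fn(probe, g) for g in gallery])
    return int(np.argsort(dists).tolist().index(true_idx))

def fit_source_weights(source_dists, target_pairs):
    """Weight each source model by its ranking quality on a few labelled
    target-domain pairs. target_pairs: list of (probe, gallery, true_idx).
    Weighting scheme (inverse mean rank) is an illustrative assumption."""
    weights = []
    for d in source_dists:
        ranks = [rank_of_true_match(d, p, g, t) for p, g, t in target_pairs]
        weights.append(1.0 / (1.0 + float(np.mean(ranks))))
    weights = np.array(weights)
    return weights / weights.sum()

def combine_sources(source_dists, weights):
    """Fused target-domain model: weighted sum of source distance functions."""
    def fused(a, b):
        return sum(w * d(a, b) for w, d in zip(weights, source_dists))
    return fused
```

The key point mirrored from the abstract: only the small set of `target_pairs` needs annotation in the target camera pair; the expensive discriminative learning happened once per source domain.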