Human appearance modeling for matching across video sequences

  • Authors:
  • Yang Yu; David Harwood; Kyongil Yoon; Larry S. Davis

  • Affiliations:
  • University of Maryland, Institute for Advanced Computer Studies (UMIACS), College Park, MD 20742, USA (Yang Yu, David Harwood, Larry S. Davis); McDaniel College, Department of Mathematics and Computer Science, Westminster, MD 21157, USA (Kyongil Yoon)

  • Venue:
  • Machine Vision and Applications
  • Year:
  • 2007


Abstract

We present an appearance model for establishing correspondence between tracks of people that may be recorded at different places, at different times, or by different cameras. The appearance model is constructed by kernel density estimation. To incorporate structural information and to achieve invariance to motion and pose, a path-length feature is used in addition to color features. To achieve illumination invariance, two types of illumination-insensitive color features are considered: a brightness color feature and an RGB rank feature. The similarity between a test image and an appearance model is measured by the information gain, or Kullback–Leibler distance. To represent the information contained in a video sequence as thoroughly as possible with as little data as possible, a key frame selection and matching scheme is proposed. Experimental results demonstrate the important role of the path-length feature in the appearance model and the effectiveness of the proposed appearance model and matching method.
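
The following is a minimal, hypothetical Python sketch of the core idea described in the abstract: fit a kernel density estimate over per-pixel feature vectors (illumination-insensitive color plus a path-length coordinate) and score a test observation against a stored model with an estimated Kullback–Leibler distance. The function names, bandwidth choice, and Monte-Carlo KL estimator are assumptions for illustration; the paper's exact feature extraction and key-frame selection scheme are not reproduced here.

```python
# Hypothetical sketch of a KDE appearance model with a KL-distance match score.
import numpy as np
from scipy.stats import gaussian_kde


def build_appearance_model(features):
    """Fit a KDE over feature rows of shape (n_pixels, n_dims),
    e.g. columns = (rank-normalized R, G, B, path-length)."""
    return gaussian_kde(features.T)  # gaussian_kde expects (dims, samples)


def kl_distance(model_p, test_features, eps=1e-12):
    """Monte-Carlo estimate of KL(Q || P): average log-ratio of the
    test-image feature density Q to the stored model density P,
    evaluated at the test samples themselves."""
    model_q = gaussian_kde(test_features.T)
    p = model_p(test_features.T) + eps
    q = model_q(test_features.T) + eps
    return float(np.mean(np.log(q / p)))


# Toy usage with random stand-in features; real features would come from
# the silhouette pixels of a tracked person.
rng = np.random.default_rng(0)
model_feats = rng.normal(size=(500, 4))   # (R, G, B, path-length) per pixel
test_feats = rng.normal(size=(400, 4))

model = build_appearance_model(model_feats)
print("KL distance:", kl_distance(model, test_feats))
```

In this sketch a smaller KL distance indicates a closer match between the test image and the stored appearance model; matching across sequences would then amount to choosing the model with the lowest distance.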