High-Level Fusion of Depth and Intensity for Pedestrian Classification

  • Authors:
  • Marcus Rohrbach; Markus Enzweiler; Dariu M. Gavrila

  • Affiliations:
  • Environment Perception, Group Research, Daimler AG, Ulm, Germany and Dept. of Computer Science, TU Darmstadt, Germany; Image & Pattern Analysis Group, Dept. of Math. and Computer Science, Univ. of Heidelberg, Germany; Environment Perception, Group Research, Daimler AG, Ulm, Germany and Intelligent Systems Lab, Fac. of Science, Univ. of Amsterdam, The Netherlands

  • Venue:
  • Proceedings of the 31st DAGM Symposium on Pattern Recognition
  • Year:
  • 2009

Abstract

This paper presents a novel approach to pedestrian classification which involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propose to extract discriminative spatial features (gradient orientation histograms and local receptive fields) directly from (dense) depth and intensity images. Both modalities are represented in terms of individual feature spaces, in each of which a discriminative model is learned to distinguish between pedestrians and non-pedestrians. We refrain from the construction of a joint feature space, but instead employ a high-level fusion of depth and intensity at the classifier level. Our experiments on a large real-world dataset demonstrate a significant performance improvement of the combined intensity-depth representation over depth-only and intensity-only models (a factor-of-four reduction in false positives at comparable detection rates). Moreover, high-level fusion outperforms low-level fusion using a joint feature space approach.
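The classifier-level fusion described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual pipeline: it uses synthetic random features and logistic regression as the per-modality discriminative model (the paper uses gradient orientation histograms / local receptive fields with its own classifiers), and a simple weighted average of the two posteriors as the fusion rule. All variable names and the fusion weight `w` are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 200 samples, pedestrian (1) vs. non-pedestrian (0).
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Hypothetical per-modality feature spaces (the paper extracts spatial
# features from intensity and dense depth images; here they are random).
X_intensity = rng.normal(size=(n, 16)) + y[:, None] * 0.8
X_depth = rng.normal(size=(n, 8)) + y[:, None] * 0.5

# One discriminative model per modality, trained independently
# (no joint feature space is constructed).
clf_intensity = LogisticRegression().fit(X_intensity, y)
clf_depth = LogisticRegression().fit(X_depth, y)

def fuse_scores(xi, xd, w=0.5):
    """High-level fusion: combine per-modality posteriors at classifier level.

    `w` is an assumed fusion weight; equal weighting is just one choice.
    """
    p_i = clf_intensity.predict_proba(xi)[:, 1]
    p_d = clf_depth.predict_proba(xd)[:, 1]
    return w * p_i + (1.0 - w) * p_d

p_fused = fuse_scores(X_intensity, X_depth)
```

The low-level (joint feature space) baseline the paper compares against would instead concatenate `X_intensity` and `X_depth` and train a single classifier on the combined vector.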