Pedestrian detection in unseen scenes by dynamically updating visual words

  • Authors:
  • Xianbin Cao;Li Wang;Bo Ning;Yuan Yuan;Pingkun Yan

  • Venue:
  • Neurocomputing
  • Year:
  • 2013

Abstract

Adapting trained detectors to unseen scenes is a critical problem in pedestrian detection. The performance of a trained detector may drop quickly when scenes vary significantly. Retraining the detector with labeled samples from the new scenes can improve its performance, but obtaining enough labeled samples is difficult in real applications. In this paper, a novel bag-of-visual-words based method is proposed to detect pedestrians in unseen scenes by dynamically updating the key words. The proposed method achieves its adaptability through three strategies covering key word selection, detector invariance, and codebook updating: (1) to select typical words representing pedestrians, a low-dimensional model of the visual words is built with manifold learning to describe their distribution, and key words are selected from it; (2) the matching confidence vector (MCV), a novel visual-word measurement, is proposed to generate a uniform input vector for a fixed detector applied to different pedestrian codebooks; (3) when detecting pedestrians under changing road conditions, the key word set is dynamically adjusted according to the matching frequency of each word, adapting the detector to the new scenes. With these strategies, the proposed method can detect pedestrians in different scenes without retraining the detector. Experiments in different scenes show that the proposed method adapts better to varying scenes and outperforms existing methods in unseen scenes.
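The interplay of strategies (2) and (3) can be illustrated with a minimal sketch: a codebook that produces a fixed-length confidence vector over its key words (so the same detector input size works for any codebook) and that swaps out rarely matched key words as scenes change. The class name, the exponential-of-distance confidence measure, and the replacement policy here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class DynamicCodebook:
    """Sketch of a pedestrian codebook with an MCV-style fixed-length
    output and frequency-driven key-word replacement (assumed details)."""

    def __init__(self, key_words, candidate_words):
        self.key_words = np.asarray(key_words, dtype=float)      # (K, D)
        self.candidates = [np.asarray(w, float) for w in candidate_words]
        self.match_counts = np.zeros(len(self.key_words))

    def mcv(self, descriptors):
        """Return a length-K confidence vector: for each key word, the
        best matching confidence over the image's local descriptors.
        The length is fixed at K, so the detector sees a uniform input
        regardless of which words currently populate the codebook."""
        descriptors = np.asarray(descriptors, dtype=float)       # (N, D)
        dists = np.linalg.norm(
            descriptors[:, None, :] - self.key_words[None, :, :], axis=2)
        conf = np.exp(-dists).max(axis=0)     # best confidence per word
        # Tally which key word each descriptor matched most closely,
        # to drive the later codebook update.
        np.add.at(self.match_counts, dists.argmin(axis=1), 1)
        return conf / conf.sum()

    def update(self, min_count=1):
        """Strategy (3): replace key words matched fewer than min_count
        times with candidate words, then reset the counters."""
        for j in np.where(self.match_counts < min_count)[0]:
            if self.candidates:
                self.key_words[j] = self.candidates.pop(0)
        self.match_counts[:] = 0
```

Keeping the MCV length fixed at K is what lets the trained detector stay frozen while the codebook contents drift with the scene; only the words behind each vector slot change.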