Find you wherever you are: geographic location and environment context-based pedestrian detection

  • Authors:
  • Yuan Liu; Zhongchao Shi; Gang Wang; Haike Guan

  • Affiliations:
  • Ricoh Software Research Center (Beijing) Co., Ltd., Beijing, China (Yuan Liu, Zhongchao Shi, Gang Wang); Ricoh Company, Ltd., Yokohama, Japan (Haike Guan)

  • Venue:
  • Proceedings of the ACM Multimedia 2012 Workshop on Geotagging and Its Applications in Multimedia
  • Year:
  • 2012

Abstract

Most existing approaches to pedestrian detection rely only on visual appearance as the main cue in real-world images. However, visual information alone cannot always provide reliable guidance, since pedestrians often change pose or wear different clothes under different conditions. In this work, by leveraging a vast amount of Web images, we first construct a contextual image database in which each image is automatically tagged with its geographic location (i.e., latitude and longitude) and environment information (i.e., season, time and weather condition), assisted by image metadata and a few pre-trained classifiers. For subsequent pedestrian detection, an annotation scheme is presented that sharply reduces manual labeling effort. Several properties of the contextual image database are studied, including whether the database is authentic and helpful for pedestrian detection. Moreover, we propose a context-based pedestrian detection approach that jointly explores visual and contextual cues in a probabilistic model. Encouraging results are reported on our contextual image database.
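The abstract describes fusing visual and contextual cues in a probabilistic model but gives no implementation details. The sketch below is one plausible reading, not the authors' actual method: a contextual prior P(pedestrian | location, season, time, weather) is estimated by counting over a labeled contextual image database, and is combined with a visual detector's score under a naive independence assumption. All names here (ContextKey, ContextPrior, fused_score) and the sigmoid score mapping are illustrative assumptions.

```python
# Hypothetical sketch of context-based pedestrian detection: combine a visual
# detector score with a contextual prior estimated from a geo/environment-tagged
# image database. Illustrative only; not the paper's implementation.
import math
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class ContextKey:
    """A quantized context bin: location plus environment conditions."""
    lat_bin: int      # quantized latitude
    lon_bin: int      # quantized longitude
    season: str       # e.g. "winter"
    time_of_day: str  # e.g. "day" or "night"
    weather: str      # e.g. "snowy"


class ContextPrior:
    """Estimates P(pedestrian | context) by counting over the contextual database."""

    def __init__(self, smoothing: float = 1.0):
        self.pos = defaultdict(float)    # images in this bin containing pedestrians
        self.total = defaultdict(float)  # all images in this bin
        self.smoothing = smoothing

    def update(self, key: ContextKey, has_pedestrian: bool) -> None:
        self.total[key] += 1.0
        if has_pedestrian:
            self.pos[key] += 1.0

    def prob(self, key: ContextKey) -> float:
        # Laplace-smoothed pedestrian frequency for this context bin.
        return (self.pos[key] + self.smoothing) / (self.total[key] + 2.0 * self.smoothing)


def visual_probability(detector_score: float) -> float:
    """Map a raw detector margin (e.g. from an HOG+SVM window classifier) to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-detector_score))


def fused_score(detector_score: float, prior: ContextPrior, key: ContextKey) -> float:
    """Posterior-style fusion of visual and contextual cues (naive independence)."""
    p_vis = visual_probability(detector_score)
    p_ctx = prior.prob(key)
    num = p_vis * p_ctx
    den = num + (1.0 - p_vis) * (1.0 - p_ctx)
    return num / den


if __name__ == "__main__":
    prior = ContextPrior()
    key = ContextKey(lat_bin=399, lon_bin=1164, season="winter",
                     time_of_day="day", weather="snowy")
    # Pretend the contextual database holds 12 images in this bin, 8 with pedestrians.
    for i in range(12):
        prior.update(key, has_pedestrian=(i < 8))
    # A weak visual detection (score 0.3) is boosted by a pedestrian-rich context.
    print(fused_score(detector_score=0.3, prior=prior, key=key))
```

A detection would then be accepted when the fused score exceeds a threshold, so a borderline visual response can be promoted or suppressed depending on how common pedestrians are under the image's location and environment conditions.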