Object detection based on a robust and accurate statistical multi-point-pair model

  • Authors:
  • Xinyue Zhao; Yutaka Satoh; Hidenori Takauji; Shun'ichi Kaneko; Kenji Iwata; Ryushi Ozaki

  • Affiliations:
  • Graduate School of Information Science and Technology, Hokkaido University, Hokkaido, Japan; National Institute of Advanced Industrial Science and Technology (AIST), Ibaraki, Japan; Graduate School of Information Science and Technology, Hokkaido University, Hokkaido, Japan; Graduate School of Information Science and Technology, Hokkaido University, Hokkaido, Japan; National Institute of Advanced Industrial Science and Technology (AIST), Ibaraki, Japan; Tsukuba University, Tsukuba, Japan

  • Venue:
  • Pattern Recognition
  • Year:
  • 2011


Abstract

In this paper, we propose a robust and accurate background model, called grayscale arranging pairs (GAP). The model is based on the statistical reach feature (SRF), which is defined as a set of statistical pair-wise features. Using the GAP model, moving objects are successfully detected under a variety of complex environmental conditions. The main concept of the proposed method is the use of multiple point pairs that exhibit a stable statistical intensity relationship as a background model. The intensity difference between the two pixels of a pair is much more stable than the intensity of a single pixel, especially in varying environments. Our proposed method focuses more on the history of global spatial correlations between pixels than on the history of any given pixel or on local spatial correlations. Furthermore, we show how to reduce the GAP modeling time and present experimental results comparing GAP with existing object detection methods, demonstrating that GAP achieves superior object detection with higher precision and recall rates.
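The abstract's central idea — pairs of pixels whose intensity ordering stays statistically stable over time, with foreground flagged where those orderings break — can be sketched in a few lines of NumPy. This is a minimal illustration only, not the paper's actual SRF/GAP formulation: the random partner sampling, the stability threshold, the fixed number of pairs, and the violated-ordering vote (along with the function names `build_pair_model` and `detect`) are all assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch only -- not the paper's exact SRF/GAP algorithm.
# Pairs of pixels whose brightness ordering is stable across the training
# frames form the background model; a pixel is declared foreground when
# many of its stable pair orderings flip in a new frame.

def build_pair_model(frames, n_pairs=8, stability=0.9, seed=0):
    """frames: (T, H, W) grayscale stack.
    Returns per-pixel partner indices and the expected sign of
    (partner - pixel), keeping only statistically stable orderings."""
    rng = np.random.default_rng(seed)
    t, h, w = frames.shape
    flat = frames.reshape(t, -1)                  # (T, H*W)
    n_pix = h * w
    partners = np.empty((n_pix, n_pairs), dtype=np.int64)
    signs = np.empty((n_pix, n_pairs), dtype=np.int8)
    for p in range(n_pix):
        found = 0
        while found < n_pairs:
            q = int(rng.integers(n_pix))          # candidate partner pixel
            if q == p:
                continue
            pos = np.mean(flat[:, q] - flat[:, p] > 0)
            if pos >= stability:                  # partner reliably brighter
                partners[p, found], signs[p, found] = q, 1
                found += 1
            elif pos <= 1.0 - stability:          # partner reliably darker
                partners[p, found], signs[p, found] = q, -1
                found += 1
    return partners, signs

def detect(frame, partners, signs, flip_ratio=0.5):
    """Boolean foreground mask: a pixel is foreground when more than
    flip_ratio of its learned pair orderings flip in this frame."""
    flat = frame.reshape(-1).astype(float)
    diff = flat[partners] - flat[:, None]         # (n_pix, n_pairs)
    violated = np.sign(diff) != signs
    return (violated.mean(axis=1) > flip_ratio).reshape(frame.shape)
```

As a quick sanity check, training on noisy copies of a static gradient image and then darkening or brightening a small region in a test frame should mark that region as foreground while the rest stays background, since a pixel pair spanning a large intensity gap keeps its ordering under small noise but loses it when one pixel changes drastically.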