View independent object classification by exploring scene consistency information for traffic scene surveillance

  • Authors:
  • Zhaoxiang Zhang;Kaiqi Huang;Yunhong Wang;Min Li

  • Affiliations:
  • Laboratory of Intelligent Recognition and Image Processing, Beijing Key Laboratory of Digital Media, School of Computer Science and Engineering, Beihang University, Beijing 100191, China
  • National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

  • Venue:
  • Neurocomputing
  • Year:
  • 2013

Abstract

We address the problem of view independent object classification. Our aim is to classify moving objects in traffic scene surveillance videos into pedestrians, bicycles and vehicles. However, this problem is very challenging for several reasons. Firstly, regions of interest in the videos are of low resolution and limited size due to the capabilities of conventional surveillance cameras. Secondly, intra-class variations are large due to changes in view angle, lighting conditions and environment. Thirdly, real-time performance is required for practical applications. In particular, the perspective distortion of surveillance cameras makes most 2D object features, such as size and speed, dependent on view angle and therefore unsuitable for object classification. In this paper, we explore the hidden consistency information of traffic scenes to deal with this perspective distortion. Two solutions are given to achieve automatic object classification based on simple motion and shape features on the 2D image plane, both of which are free of large-scale database collection and manual labeling. Extensive experiments with the two methods are conducted on videos of different scenes, and the results demonstrate the effectiveness of our approaches.
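
The abstract describes classifying objects from simple motion and shape features on the 2D image plane after compensating for perspective distortion with scene information. The following is a minimal, hypothetical Python sketch of that general idea, not the authors' method: the row-to-size calibration, feature names and class thresholds are all illustrative assumptions.

```python
# Hypothetical sketch: normalize simple 2D features (height, speed) by a
# scene-dependent scale that grows with the image row, so the features become
# roughly independent of where the object appears, then apply toy class rules.
import numpy as np

def fit_row_scale(rows, heights):
    """Fit a linear model height ~ a*row + b from observed reference objects;
    the model captures how apparent size changes with scene depth."""
    a, b = np.polyfit(np.asarray(rows, float), np.asarray(heights, float), 1)
    return a, b

def normalize_features(row, height, speed_px, scale):
    """Divide raw pixel measurements by the expected size at this image row."""
    a, b = scale
    expected = max(a * row + b, 1e-6)
    return height / expected, speed_px / expected

def classify(norm_height, norm_speed, aspect_ratio):
    """Toy rule-based classifier over normalized features (illustrative only)."""
    if norm_height > 1.5 or norm_speed > 2.0:
        return "vehicle"
    if aspect_ratio > 1.5 and norm_speed > 0.5:   # aspect_ratio = width / height
        return "bicycle"
    return "pedestrian"

if __name__ == "__main__":
    # Calibrate the row-to-size scale from a few observed reference objects.
    scale = fit_row_scale(rows=[100, 200, 300, 400], heights=[20, 45, 70, 95])
    h, s = normalize_features(row=250, height=30, speed_px=10, scale=scale)
    print(classify(h, s, aspect_ratio=2.8))
```

The key point the sketch illustrates is that, once apparent size is modeled as a function of image position, raw 2D measurements can be rescaled so that a single set of decision rules applies across the whole field of view, without collecting a large labeled database.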