Adaptive learning of multi-subspace for foreground detection under illumination changes

  • Authors:
  • Y. Dong; G. N. DeSouza

  • Affiliations:
  • Electrical and Computer Engineering Department, University of Missouri, Columbia, MO, USA (both authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2011


Abstract

We propose a new adaptive learning algorithm that uses multiple eigensubspaces to handle both sudden and gradual changes in the background caused, for example, by illumination variations. To handle such changes, the feature space is organized into clusters representing the different background appearances. A local principal component analysis (PCA) transformation is used to learn a separate eigensubspace for each cluster, and adaptive learning is used to continuously update these eigensubspaces. When the current image is presented, the system automatically selects the learned subspace whose appearance and lighting condition are closest to those of the input image; the image is then projected onto that subspace so that background and foreground pixels can be classified. To adapt efficiently to changes in lighting conditions, our framework incrementally updates the multiple eigensubspaces using synthetic background appearances. In this way, the system eliminates the noise and distortions that would otherwise be introduced by foreground objects, while correctly updating the specific eigensubspace that represents the current background appearance. A forgetting factor is also employed to control the contribution of earlier observations and to limit the number of learned subspaces. As extensive experimental results on various benchmark sequences demonstrate, the proposed algorithm outperforms, both quantitatively and qualitatively, many other appearance-based approaches as well as methods based on Gaussian Mixture Models (GMMs), especially under sudden and drastic changes in illumination. Finally, the proposed algorithm is shown to be linear in the image size d, the number of basis vectors in the local PCA m, and the number of images used for adaptation n; that is, the algorithm is O(dmn), and our C++ implementation runs in real time, i.e., at frame rate for normal-resolution (VGA) images.
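The core subspace-selection and classification step described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`learn_subspace`, `residual`, `classify`), the use of a plain SVD for the local PCA, and the fixed per-pixel residual threshold are all assumptions of the sketch, and the incremental update with a forgetting factor is omitted.

```python
import numpy as np

# Hypothetical sketch: keep one PCA eigensubspace per background
# appearance (cluster), pick the subspace that best reconstructs the
# current frame (closest appearance/lighting), then label pixels with
# a large reconstruction residual as foreground.

def learn_subspace(frames, m):
    """Learn a PCA subspace (mean + m basis vectors) from a stack of
    vectorized background frames of shape (n, d)."""
    mean = frames.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:m]

def residual(frame, subspace):
    """Per-pixel reconstruction error after projecting the frame
    onto the subspace."""
    mean, basis = subspace
    coeffs = basis @ (frame - mean)       # project onto the basis
    recon = mean + basis.T @ coeffs       # reconstruct the frame
    return np.abs(frame - recon)

def classify(frame, subspaces, thresh):
    """Select the subspace with the smallest total residual, then
    threshold per pixel to get a boolean foreground mask."""
    residuals = [residual(frame, s) for s in subspaces]
    best = min(residuals, key=lambda r: r.sum())
    return best > thresh
```

With two subspaces trained on bright and dark versions of the same scene, a bright frame is matched to the bright subspace, so a global lighting change is not misclassified as foreground, while a genuinely occluded patch still produces a large residual and is flagged.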