Simplified SOM-neural model for video segmentation of moving objects

  • Authors:
  • Mario I. Chacon-M., Sergio Gonzalez-D., Javier Vega-P.

  • Affiliations:
  • DSP & Vision Laboratory, Chihuahua Institute of Technology, Mexico (all authors)

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

Background determination is crucial to intelligent visual surveillance systems. Although several methods have been proposed in the literature, research on this topic remains a paramount objective in the surveillance community. High performance and low computational cost are among the main characteristics of the video segmentation model presented in this paper. The model is designed to work with semi-static backgrounds and is based on a SOM-like architecture. Neuron weight updates are performed on the fly to provide dynamic background adaptation. The model remains simple yet is tolerant to background variations such as illumination changes, shadows, and slowly moving background regions. The method was tested in several scenarios, including daytime and nighttime conditions as well as indoor and outdoor scenes. Qualitative and quantitative results show high performance on normal backgrounds and acceptable performance on highly dynamic backgrounds, compared with more complex models reported in the literature.
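The abstract does not include implementation details, so the following is only a minimal sketch of how a SOM-like per-pixel background model with on-the-fly weight updates might be structured. The class name, parameters (codebook size, learning rate, distance threshold), and the winner-take-all update rule are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a SOM-like per-pixel background model.
# Each pixel keeps k weight vectors (a tiny codebook). A new pixel is
# classified as background if it is close enough to its best-matching
# weight, and that winning weight is nudged toward the pixel on the fly.
import numpy as np

class SimpleSOMBackground:
    def __init__(self, k=3, alpha=0.05, threshold=20.0):
        self.k = k                  # weight vectors (codebook entries) per pixel
        self.alpha = alpha          # learning rate for the winning weight
        self.threshold = threshold  # max distance still counted as background
        self.weights = None         # (h, w, k, c), initialized on first frame

    def apply(self, frame):
        """Return a boolean foreground mask and update the model in place."""
        frame = frame.astype(np.float32)
        if self.weights is None:
            # Initialize every codebook entry with the first frame.
            self.weights = np.repeat(frame[:, :, None, :], self.k, axis=2)
            return np.zeros(frame.shape[:2], dtype=bool)
        # Euclidean distance from each pixel to each of its k weights.
        dists = np.linalg.norm(self.weights - frame[:, :, None, :], axis=3)
        best = dists.min(axis=2)       # distance to the best-matching unit
        winner = dists.argmin(axis=2)  # index of the winning weight
        foreground = best > self.threshold
        # On-the-fly update: move the winning weight of background pixels
        # toward the observed value (simple online background adaptation).
        rows, cols = np.nonzero(~foreground)
        win = winner[rows, cols]
        self.weights[rows, cols, win] += self.alpha * (
            frame[rows, cols] - self.weights[rows, cols, win])
        return foreground
```

In use, `apply()` would be called on successive video frames; pixels whose distance to every stored weight exceeds the threshold are flagged as moving objects, while the remaining pixels slowly refresh the background codebook, which is one common way to tolerate gradual illumination changes and slowly moving background regions.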