Background modeling using color, disparity, and motion information

  • Authors:
  • Jong Weon Lee; Hyo Sung Jeon; Sung Min Moon; Sung W. Baik

  • Affiliations:
  • Center for Emotion and Robot Vision, Sejong University, Seoul, Korea (all authors)

  • Venue:
  • ACIVS'05: Proceedings of the 7th International Conference on Advanced Concepts for Intelligent Vision Systems
  • Year:
  • 2005

Abstract

A new background modeling approach is presented in this paper. Most background modeling approaches categorize input images into foreground and background regions using pixel-based operations. Because each pixel of the input image is considered individually, parts of foreground regions are frequently absorbed into the background, and these errors cause incorrect foreground detections. The proposed approach reduces these errors and improves the accuracy of background modeling. Instead of two regions, each input image is categorized into three: background, intermediate background, and foreground. The traditional foreground region is split into intermediate background and foreground sub-regions using activity measurements computed from the optical flow at each pixel. A further difference is that pixels are grouped into objects, and these objects are used during the background updating procedure. All pixels belonging to an object are turned into background at the same rate, and that rate is computed differently depending on the object's category. By controlling the rate at which input pixels are turned into background, the proposed approach models the background accurately.
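The paper itself provides no code. The following is a minimal Python/OpenCV sketch of the idea described in the abstract: classifying pixels into three regions using an optical-flow activity measure, grouping non-background pixels into objects, and blending each object into the background model at a per-category rate. The thresholds T_COLOR and T_ACTIVITY and the RATE table are illustrative assumptions, not values from the paper, and the color-difference test stands in for whatever color/disparity model the authors actually use.

```python
import cv2
import numpy as np

# Assumed thresholds and per-category update rates (not from the paper).
T_COLOR = 30.0                      # color difference for the background test
T_ACTIVITY = 1.5                    # flow magnitude separating "active" pixels
RATE = {"background": 0.05, "intermediate": 0.02, "foreground": 0.0}

def update_background(bg, prev_gray, frame):
    """One update step: classify pixels, group them into objects, and
    blend every pixel of an object into the background at one rate."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixel-based foreground test on color difference against the model.
    diff = np.linalg.norm(frame.astype(np.float32) - bg, axis=2)
    fg_mask = diff > T_COLOR

    # Activity measurement from dense optical flow (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    activity = np.linalg.norm(flow, axis=2)

    # Group non-background pixels into connected objects; each object is
    # updated uniformly, at a rate chosen from its activity category.
    n, comp = cv2.connectedComponents(fg_mask.astype(np.uint8))
    for obj_id in range(1, n):
        obj = comp == obj_id
        category = ("foreground" if activity[obj].mean() > T_ACTIVITY
                    else "intermediate")
        a = RATE[category]
        bg[obj] = (1 - a) * bg[obj] + a * frame[obj].astype(np.float32)

    # Plain background pixels are blended at the background rate.
    bg_pix = ~fg_mask
    a = RATE["background"]
    bg[bg_pix] = (1 - a) * bg[bg_pix] + a * frame[bg_pix].astype(np.float32)
    return bg, gray
```

In this sketch an object inherits the foreground category only if its mean flow magnitude exceeds the activity threshold; otherwise it is treated as intermediate background and absorbed more slowly, which mirrors the abstract's point that the absorption rate is controlled per object rather than per pixel.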