An efficient, chromatic clustering-based background model for embedded vision platforms

  • Authors:
  • Brian Valentine, Senyo Apewokin, Linda Wills, Scott Wills

  • Affiliations:
  • Georgia Institute of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280, United States (all authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2010


Abstract

People naturally identify rapidly moving foreground and ignore persistent background. Identifying background pixels belonging to stable, chromatically clustered objects is important for efficient scene processing. This paper presents a technique that exploits this facet of human perception to improve the performance and efficiency of background modeling on embedded vision platforms. Previous work on the Multimodal Mean (MMean) approach achieves high-quality foreground extraction, comparable to Mixture of Gaussians (MoG), using fast integer computation and a compact memory representation. This paper introduces a more efficient hybrid technique that combines MMean with palette-based background matching driven by the chromatic distribution of the scene. The hybrid technique suppresses computationally expensive model update and adaptation, providing a 45% execution-time speedup over MMean, and it reduces model storage requirements by 58% relative to an MMean-only implementation. This background analysis enables higher frame rate, lower cost embedded vision systems.
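To make the hybrid idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation): a per-pixel multimodal-mean model built from integer running averages, plus a small global palette of dominant background colors. A pixel that matches the palette is classified as background immediately, skipping the per-pixel model update, which is the source of the claimed speedup. All thresholds, the cell count `K`, and the eviction policy here are assumptions for illustration.

```python
# Hedged sketch of a hybrid background model: multimodal mean (MMean-style)
# per-pixel cells plus a global chromatic palette shortcut.
# EPSILON, K, and min_count are illustrative assumptions, not the paper's values.

EPSILON = 30        # per-channel match threshold (assumed)
K = 4               # background cells per pixel (assumed)

class PixelModel:
    """Per-pixel multimodal mean: up to K running-average color cells."""
    def __init__(self):
        self.cells = []   # each cell: [sum_r, sum_g, sum_b, count]

    def match_and_update(self, rgb):
        for cell in self.cells:
            n = cell[3]
            mean = (cell[0] / n, cell[1] / n, cell[2] / n)
            if all(abs(m - c) <= EPSILON for m, c in zip(mean, rgb)):
                for i in range(3):          # integer-only running sum update
                    cell[i] += rgb[i]
                cell[3] += 1
                return True                 # matched an existing background mode
        if len(self.cells) < K:
            self.cells.append([rgb[0], rgb[1], rgb[2], 1])
        else:
            # replace the least-observed cell (simplified eviction policy)
            idx = min(range(K), key=lambda i: self.cells[i][3])
            self.cells[idx] = [rgb[0], rgb[1], rgb[2], 1]
        return False                        # no mode matched: foreground

def build_palette(models, min_count=8):
    """Collect dominant (frequently observed) background colors into a palette."""
    palette = []
    for m in models.values():
        for cell in m.cells:
            if cell[3] >= min_count:
                n = cell[3]
                palette.append((cell[0] // n, cell[1] // n, cell[2] // n))
    return palette

def classify(models, pos, rgb, palette):
    # Palette shortcut: a chromatic match classifies the pixel as background
    # and suppresses the per-pixel model update entirely.
    for p in palette:
        if all(abs(pc - c) <= EPSILON for pc, c in zip(p, rgb)):
            return "background"
    model = models.setdefault(pos, PixelModel())
    return "background" if model.match_and_update(rgb) else "foreground"
```

In this sketch, a stable gray wall pixel would be absorbed into the palette after a few frames; subsequent frames resolve it with a handful of palette comparisons instead of touching its per-pixel cells, while a chromatically distinct object (e.g. a red coat) falls through to the full MMean-style match and is flagged as foreground.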