Automatic video segmentation using genetic algorithms

  • Authors:
  • Eun Yi Kim; Se Hyun Park

  • Affiliations:
  • Department of Internet and Multimedia Engineering, Next-generation Innovative Technology Research Institute, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul, Republic of Korea; School of Computer and Communication, Daegu University, Republic of Korea

  • Venue:
  • Pattern Recognition Letters - Special issue: Evolutionary computer vision and image understanding
  • Year:
  • 2006

Abstract

The current paper proposes a genetic algorithm (GA)-based segmentation method that can automatically extract and track moving objects. The proposed method mainly consists of spatial and temporal segmentation: the spatial segmentation divides each frame into regions with accurate boundaries, while the temporal segmentation divides each frame into background and foreground areas. The spatial segmentation is performed using distributed genetic algorithms (DGAs), in which a population of individuals evolves. However, unlike standard DGAs, the individuals are initialized from the segmentation result of the previous frame, and only the unstable individuals corresponding to actual moving object parts are then evolved by mating operators. For the temporal segmentation, adaptive thresholding is performed based on the intensity difference between two consecutive frames. The spatial and temporal segmentation results are then combined for object extraction, and tracking is performed using the natural correspondence established by the proposed spatial segmentation method. The main advantages of the proposed method are twofold: first, the video segmentation does not require any a priori information; second, the GA-based segmentation enhances the search efficiency and incorporates a tracking algorithm within its own architecture. These advantages were confirmed by experiments in which the proposed method was successfully applied to well-known and natural video sequences.
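
As a concrete illustration of the pipeline described in the abstract, the Python sketch below implements a temporal segmentation step (adaptive thresholding of the intensity difference between two consecutive frames) and a simple combination of spatial region labels with the resulting change mask. The particular threshold rule (mean plus k standard deviations of the difference image) and the majority-vote combination rule are illustrative assumptions, not the authors' exact formulas, and the DGA-based spatial segmentation that would produce the region labels is not reproduced here.

```python
import numpy as np


def temporal_segmentation(prev_frame: np.ndarray,
                          curr_frame: np.ndarray,
                          k: float = 2.0) -> np.ndarray:
    """Flag changed pixels between two consecutive grayscale frames.

    The threshold is derived adaptively from the statistics of the
    difference image (mean + k * std); this rule is an assumption made
    for illustration only.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    threshold = diff.mean() + k * diff.std()
    return diff > threshold


def combine(region_labels: np.ndarray,
            change_mask: np.ndarray,
            ratio: float = 0.5) -> np.ndarray:
    """Mark a spatial region as moving-object foreground when more than
    `ratio` of its pixels are flagged as changed (illustrative rule).

    `region_labels` is an integer label image, e.g. the output of any
    spatial segmentation; in the paper this would come from the DGA stage.
    """
    object_mask = np.zeros_like(change_mask, dtype=bool)
    for label in np.unique(region_labels):
        region = region_labels == label
        if change_mask[region].mean() > ratio:
            object_mask |= region
    return object_mask
```

In this sketch the per-region decision is what links the two results: the temporal mask alone is noisy at the pixel level, while the spatial regions supply the accurate boundaries, so classifying whole regions by their fraction of changed pixels yields object masks with region-level precision.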