Video segmentation for content-based coding

  • Authors:
  • T. Meier; K. N. Ngan

  • Affiliations:
  • Dept. of Electr. & Electron. Eng., Western Australia Univ., Nedlands, WA

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 1999

Abstract

To provide multimedia applications with new functionalities, the new video coding standard MPEG-4 relies on a content-based representation. This requires a prior decomposition of sequences into semantically meaningful, physical objects. We formulate this problem as one of separating foreground objects from the background based on motion information. For the object of interest, a 2D binary model is derived and tracked throughout the sequence. The model points consist of edge pixels detected by the Canny operator. To accommodate rotation and changes in shape of the tracked object, the model is updated every frame. These binary models then guide the actual video object plane (VOP) extraction. Thanks to our new boundary postprocessor and the excellent edge localization properties of the Canny operator, the resulting VOP contours are very accurate. Both the model initialization and update stages exploit motion information. The main assumption underlying our approach is the existence of a dominant global motion that can be assigned to the background. Areas that do not follow this background motion indicate the presence of independently moving physical objects. Two alternative methods to identify such objects are presented. The first employs a morphological motion filter with a new filter criterion, which measures the deviation of the locally estimated optical flow from the corresponding global motion. The second computes a change detection mask by taking the difference between consecutive frames. The first method is more suitable for sequences with little motion, whereas the second is better at dealing with faster moving or changing objects. Experimental results demonstrate the performance of our algorithm.
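
To make the binary-model idea concrete, the following is a minimal sketch of deriving a 2D edge model with the Canny operator. The masking of the edge map by an object support region, the function name `binary_edge_model`, and the threshold values are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: derive a 2D binary model as the set of Canny edge pixels
# inside the current object support region (assumed given here).
import cv2
import numpy as np

def binary_edge_model(frame_gray, object_mask, lo=50, hi=150):
    """Return Canny edge pixels restricted to the object region.

    frame_gray:  uint8 grayscale frame
    object_mask: uint8 mask (255 inside object, 0 outside)
    lo, hi:      hysteresis thresholds (illustrative values)
    """
    edges = cv2.Canny(frame_gray, lo, hi)               # edge localization
    model = cv2.bitwise_and(edges, edges, mask=object_mask)
    return model                                        # binary model points
```

Re-running this on each new frame with the tracked support region corresponds to the per-frame model update that accommodates rotation and shape changes.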
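The two object-identification alternatives can likewise be sketched. Note the assumptions: Farneback dense flow stands in for the locally estimated optical flow, a least-squares affine fit stands in for the dominant global motion, and a simple per-pixel threshold replaces the paper's morphological motion filter; all function names and threshold values are hypothetical.

```python
# Sketches of the two alternatives for flagging independently moving
# objects; simplified stand-ins, not the paper's exact filters.
import cv2
import numpy as np

def motion_deviation_mask(prev, curr, thresh=1.5):
    """Method 1 (sketch): flag pixels whose local optical flow deviates
    from a globally fitted affine (background) motion."""
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)
    # Least-squares affine fit of the dominant global motion.
    px, _, _, _ = np.linalg.lstsq(A, flow[..., 0].ravel(), rcond=None)
    py, _, _, _ = np.linalg.lstsq(A, flow[..., 1].ravel(), rcond=None)
    global_flow = np.dstack([(A @ px).reshape(h, w),
                             (A @ py).reshape(h, w)])
    deviation = np.linalg.norm(flow - global_flow, axis=2)
    return (deviation > thresh).astype(np.uint8) * 255

def change_detection_mask(prev, curr, thresh=25):
    """Method 2 (sketch): threshold the difference between consecutive
    frames; per the abstract, better for fast-moving objects."""
    diff = cv2.absdiff(curr, prev)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```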