Real-time spatiotemporal segmentation of video objects in the H.264 compressed domain

  • Authors:
  • Zhi Liu, Yu Lu, Zhaoyang Zhang

  • Affiliations:
  • Zhi Liu: School of Communication and Information Engineering, Shanghai University, Shanghai 200072, China; and School of Information Systems, Computing and Mathematics, Brunel University, Uxbridge, Middlese ...
  • Yu Lu and Zhaoyang Zhang: School of Communication and Information Engineering, Shanghai University, Shanghai 200072, China

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2007

Abstract

This paper presents a real-time spatiotemporal segmentation approach that extracts video objects in the H.264 compressed domain. The only segmentation cue exploited is the motion vector (MV) field extracted from the H.264 compressed video. The MV field is first normalized temporally and spatially, then accumulated by an iterative backward-projection scheme to enhance salient motion. Global motion compensation is performed on the accumulated MV field, which is then moderately segmented into motion-homogeneous regions by a modified statistical region-growing algorithm. Hypothesis testing on the block residuals of global motion compensation is employed for intra-frame classification of the segmented regions, and projection of previously segmented objects is exploited for inter-frame tracking. Using the results of intra-frame classification and inter-frame tracking as input, a correspondence-matrix-based spatiotemporal segmentation approach is proposed to segment video objects in a unified and efficient way under a variety of situations, including object appearance and disappearance, object splitting and merging, moving objects that stop, multiple-object tracking, and scene changes. Experimental results on several H.264 compressed video sequences demonstrate the real-time performance and good segmentation quality of the proposed approach.
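The correspondence-matrix idea in the abstract can be illustrated with a minimal sketch: if each previously tracked object (projected into the current frame) and each newly segmented region is represented as a set of block coordinates, a matrix of overlaps determines which situation applies to each object. The function names (`build_matrix`, `classify_events`) and the set-of-blocks representation are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of correspondence-matrix-based tracking.
# prev_objects: projected objects from the previous frame, each a set of
# block coordinates; regions: segmented regions of the current frame.

def build_matrix(prev_objects, regions):
    """C[i][j] = number of blocks shared by projected object i and region j."""
    return [[len(obj & reg) for reg in regions] for obj in prev_objects]

def classify_events(C):
    """Derive object events (track/split/merge/appear/disappear) from C."""
    n_obj = len(C)
    n_reg = len(C[0]) if C else 0
    events = []
    for i in range(n_obj):
        hits = [j for j in range(n_reg) if C[i][j] > 0]
        if not hits:
            events.append(("disappear", i))          # object vanished
        elif len(hits) > 1:
            events.append(("split", i, hits))        # object broke into regions
        else:
            events.append(("track", i, hits[0]))     # one-to-one continuation
    for j in range(n_reg):
        srcs = [i for i in range(n_obj) if C[i][j] > 0]
        if not srcs:
            events.append(("appear", j))             # new object
        elif len(srcs) > 1:
            events.append(("merge", srcs, j))        # objects fused
    return events
```

A full implementation would additionally weight overlaps by block size and resolve conflicts between the row-wise and column-wise decisions, but the matrix itself is what lets all of these cases be handled in one uniform pass.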