Motion-Based Semantic Event Detection for Video Content Description in MPEG-7

  • Authors:
  • Duan-Yu Chen;Suh-Yin Lee

  • Venue:
  • PCM '01 Proceedings of the Second IEEE Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing
  • Year:
  • 2001

Abstract

In this paper, we propose an automatic two-level approach that segments videos into semantically meaningful abstracted shots based mainly on inferred video events. In the first level, we detect scene changes to segment the video sequence into shots. In the second level, each shot generated by level 1 is analyzed using camera operations and object motion, both computed directly from the motion vectors of MPEG-2 video streams in the compressed domain. Events in tennis videos are then inferred from object trajectories together with domain-specific knowledge. Video shots are further segmented according to the detected events, so that semantically meaningful video clips can be generated. These clips can assist in annotating video shots, summarizing video content, and producing descriptions and description schemes in the MPEG-7 standard.
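The level-2 analysis above relies on estimating the dominant camera operation from the macroblock motion vectors already present in the compressed stream. A minimal sketch of that idea, assuming motion vectors are supplied as a list of per-macroblock `(dx, dy)` pairs (the function name and threshold are illustrative, not from the paper):

```python
import statistics

def classify_camera_motion(motion_vectors, thresh=2.0):
    """Classify the dominant camera operation for one frame.

    motion_vectors: list of (dx, dy) pairs, one per macroblock,
    as would be read from MPEG-2 P-frames (hypothetical input format).
    The median is used as a robust estimate of global motion, so that
    moving foreground objects do not dominate the estimate.
    """
    med_dx = statistics.median(v[0] for v in motion_vectors)
    med_dy = statistics.median(v[1] for v in motion_vectors)
    # A strong horizontal component suggests a pan, vertical a tilt.
    if abs(med_dx) >= thresh and abs(med_dx) >= abs(med_dy):
        return "pan_right" if med_dx > 0 else "pan_left"
    if abs(med_dy) >= thresh:
        return "tilt_down" if med_dy > 0 else "tilt_up"
    return "static"
```

Subtracting the estimated global motion from each macroblock vector then leaves residual vectors attributable to object motion, from which object trajectories can be tracked across frames.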