Semantic analysis of basketball video using motion information

  • Authors:
  • Song Liu;Haoran Yi;Liang-Tien Chia;Deepu Rajan;Syin Chan

  • Affiliations:
  • Center for Multimedia and Network Technology, School of Computer Engineering, Nanyang Technological University, Singapore (all authors)

  • Venue:
  • PCM'04 Proceedings of the 5th Pacific Rim conference on Advances in Multimedia Information Processing - Volume Part I
  • Year:
  • 2004

Abstract

This paper presents a new method for extracting semantic information from basketball video. Our approach consists of three stages: shot and scene boundary detection, scene classification, and semantic video analysis for event detection. The scene boundary detection algorithm is based on both visual and motion prediction information. After shot and scene boundaries are detected, a set of visual and motion features is extracted from each scene or shot. The motion features, which describe the total motion, camera motion and object motion within a scene, are computed from the motion vectors of the compressed video using an iterative algorithm with robust outlier rejection. Finally, the extracted features are used to differentiate offensive and defensive activities in the scenes. By analyzing these activities, the positions of potential semantic events, such as fouls and goals, are located. Experimental results demonstrate the effectiveness of the proposed method.
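
The camera-motion step described in the abstract (fitting a global motion model to compressed-domain motion vectors with iterative outlier rejection) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: it assumes macroblock motion vectors have already been decoded, uses a simple six-parameter affine camera model, and the function name `estimate_camera_motion` and the residual threshold are hypothetical choices.

```python
import numpy as np

def estimate_camera_motion(positions, vectors, n_iter=5, thresh=2.0):
    """Fit a global affine motion model to macroblock motion vectors,
    iteratively rejecting outlier blocks (assumed to carry object motion).

    positions : (N, 2) array of macroblock centre coordinates (x, y)
    vectors   : (N, 2) array of block motion vectors (dx, dy)
    Returns the six affine parameters and a boolean inlier mask.
    """
    inliers = np.ones(len(positions), dtype=bool)
    params = None
    for _ in range(n_iter):
        if inliers.sum() < 3:          # not enough blocks to fit the model
            break
        # Build the least-squares system for the current inlier blocks:
        #   dx = a1*x + a2*y + a3,   dy = a4*x + a5*y + a6
        x, y = positions[inliers, 0], positions[inliers, 1]
        ones = np.ones_like(x)
        A = np.zeros((2 * inliers.sum(), 6))
        A[0::2, 0:3] = np.column_stack([x, y, ones])
        A[1::2, 3:6] = np.column_stack([x, y, ones])
        b = vectors[inliers].reshape(-1)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)

        # Residuals for *all* blocks; large residuals are treated as object motion
        xa, ya = positions[:, 0], positions[:, 1]
        pred = np.column_stack([
            params[0] * xa + params[1] * ya + params[2],
            params[3] * xa + params[4] * ya + params[5],
        ])
        residual = np.linalg.norm(vectors - pred, axis=1)
        new_inliers = residual < thresh
        if np.array_equal(new_inliers, inliers):
            break                       # converged: inlier set is stable
        inliers = new_inliers
    return params, inliers
```

Under these assumptions, the fitted parameters approximate the camera motion, the residual vectors over the rejected blocks serve as a proxy for object motion, and the mean magnitude of all motion vectors gives the total motion, mirroring the three motion descriptors mentioned in the abstract.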