Story unit segmentation with friendly acoustic perception

  • Authors:
  • Longchuan Yan; Jun Du; Qingming Huang; Shuqiang Jiang

  • Affiliations:
  • Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China and Graduate School of Chinese Academy of Sciences, Beijing, China; NEC Laboratories China, Beijing, China; Graduate School of Chinese Academy of Sciences, Beijing, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

  • Venue:
  • MCAM'07 Proceedings of the 2007 international conference on Multimedia content analysis and mining
  • Year:
  • 2007

Abstract

Automatic story unit segmentation is an essential technique for content-based video retrieval and summarization. A good video story unit has complete content and natural boundaries in both visual and acoustic perception. In this paper, an acoustic-perception-friendly story unit segmentation method for broadcast soccer video is proposed. The approach combines replay detection, view pattern analysis, and non-speech detection to segment story units. Firstly, a replay detection method is applied to find the highlight events in soccer video. Secondly, based on the positions of replay clips, an FSM (Finite State Machine) is used to obtain rough starting points of story units. Finally, audio boundary alignment is employed to locate natural audio boundaries for acoustic perception. The algorithm is tested on several broadcast soccer videos, and story units segmented with and without audio alignment are compared in terms of acoustic perception. The experimental results indicate that the proposed algorithm is effective.
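The abstract's pipeline can be sketched at a high level: from each detected replay clip, a state machine walks backward through shot-view labels to a rough story start, and that rough boundary is then snapped to the nearest non-speech audio boundary. The sketch below is illustrative only; the shot labels, the stop-at-far-view rule, and the function names are assumptions, not the paper's actual FSM or detectors.

```python
from bisect import bisect_left

def rough_start(shot_labels, replay_idx, max_back=5):
    """Walk backward from a replay clip through shot-view labels.

    Hypothetical rule standing in for the paper's FSM: stop at the
    first far-view shot, taken here as the start of the build-up
    to the highlight; give up after max_back shots.
    """
    i = replay_idx
    for _ in range(max_back):
        if i == 0:
            return 0
        if shot_labels[i - 1] == 'far':
            return i - 1
        i -= 1
    return i

def align_to_audio(t, boundaries):
    """Snap a rough boundary time to the nearest non-speech audio
    boundary (boundaries: sorted list of timestamps in seconds)."""
    j = bisect_left(boundaries, t)
    candidates = boundaries[max(j - 1, 0):j] + boundaries[j:j + 1]
    return min(candidates, key=lambda b: abs(b - t))
```

For example, with shot labels `['far', 'close', 'close', 'replay']` and a replay at index 3, the rough start falls on the far-view shot at index 0; a rough time of 4.8 s snaps to a non-speech boundary at 5.2 s if the detected boundaries are `[0.0, 5.2, 10.1]`.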