Analyzing discussion scene contents in instructional videos

  • Authors:
  • Ying Li; Chitra Dorai

  • Affiliations:
  • IBM T.J. Watson Research Center, NY (both authors)

  • Venue:
  • Proceedings of the 12th annual ACM international conference on Multimedia
  • Year:
  • 2004


Abstract

This paper describes our current effort to analyze the contents of discussion scenes in instructional videos using a clustering technique. Specifically, given a discussion scene pre-detected from an educational or training video, we first apply a mode-based clustering approach to group all speech segments into an optimal number of clusters, where each cluster contains speech from a single speaker; we then analyze the discussion patterns in the scene and classify it as either a 2-speaker or a multi-speaker discussion. Encouraging classification results have been achieved on 122 discussion scenes detected from five IBM MicroMBA videos. We have also observed good performance from the speaker clustering scheme, which demonstrates the effectiveness of the proposed clustering approach. The discussion scene information produced by this analysis would facilitate content browsing, searching, and understanding of instructional videos.
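The pipeline the abstract describes — cluster speech segments by speaker, then label the scene by how many speakers emerge — can be sketched as follows. This is only an illustration: the paper's actual mode-based clustering is not detailed here, so a simple greedy agglomerative clustering with a distance threshold stands in, and the segment feature vectors, `threshold` value, and helper names are all hypothetical.

```python
import math

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_segments(features, threshold=1.0):
    """Greedy agglomerative clustering (a stand-in for the paper's
    mode-based scheme): repeatedly merge the two clusters whose
    centroids are closest, stopping once every pair is farther
    apart than `threshold`. Each resulting cluster is treated as
    the speech of one speaker."""
    clusters = [[f] for f in features]

    def centroid(c):
        return [sum(v) / len(c) for v in zip(*c)]

    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def classify_discussion(features, threshold=1.0):
    """Label the scene by the number of speaker clusters found."""
    n = len(cluster_segments(features, threshold))
    return "2-speaker" if n <= 2 else "multi-speaker"

# Toy data: tight groups of 2-D segment features, one group per speaker.
two_speakers = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
three_speakers = two_speakers + [[10.0, 0.0], [10.1, 0.1]]
print(classify_discussion(two_speakers))    # 2-speaker
print(classify_discussion(three_speakers))  # multi-speaker
```

In practice the features would be acoustic descriptors (e.g. spectral features) extracted per speech segment rather than hand-placed points, and the stopping criterion would come from the paper's optimal-cluster-number estimation rather than a fixed threshold.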