Structure and content-based segmentation of speech transcripts

  • Authors:
  • Dulce Ponceleon; Savitha Srinivasan

  • Affiliations:
  • IBM Almaden Research Center, San Jose, CA; IBM Almaden Research Center, San Jose, CA

  • Venue:
  • Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
  • Year:
  • 2001

Abstract

An algorithm for the segmentation of an audio/video source into topically cohesive segments based on automatic speech recognition (ASR) transcriptions is presented. A novel two-pass algorithm is described that combines a boundary-based method with a content-based method. In the first pass, the temporal proximity and the rate of arrival of n-gram features are analyzed to compute an initial segmentation. In the content-based second pass, changes in content-bearing words are detected by using the n-gram features as queries in an information-retrieval system. The second pass validates the initial segments and merges them as needed. The feasibility of the segmentation task can vary enormously depending on the structure of the audio content and the accuracy of ASR. For real-world corporate training data, our method identifies, at worst, a single salient segment of the audio and, at best, a high-level table of contents. We illustrate the algorithm in detail with examples and validate the results against manually generated segmentation boundaries.
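
The abstract describes the two-pass algorithm only at a high level. The Python sketch below is a hypothetical toy illustration of that idea, not the authors' implementation: the bigram-overlap test used as a stand-in for the arrival rate of n-gram features, the cosine similarity used as a stand-in for the information-retrieval query step, and the thresholds `threshold` and `merge_threshold` are all assumptions made for illustration.

```python
# Toy sketch of a two-pass transcript segmenter (illustrative assumptions only).
from collections import Counter
import math


def ngrams(tokens, n=2):
    """Return the word n-grams in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def pass1_boundaries(sentences, n=2, threshold=0.2):
    """Pass 1 (boundary-based): propose a boundary wherever many new n-gram
    features arrive, i.e. adjacent sentences share few n-grams (low Jaccard)."""
    boundaries = []
    for i in range(1, len(sentences)):
        prev = set(ngrams(sentences[i - 1].lower().split(), n))
        curr = set(ngrams(sentences[i].lower().split(), n))
        if not prev or not curr:
            continue
        overlap = len(prev & curr) / len(prev | curr)   # Jaccard similarity
        if overlap < threshold:                         # mostly new features
            boundaries.append(i)
    return boundaries


def cosine(c1, c2):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(c1[t] * c2[t] for t in set(c1) & set(c2))
    den = math.sqrt(sum(v * v for v in c1.values())) * \
          math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0


def pass2_merge(sentences, boundaries, merge_threshold=0.3):
    """Pass 2 (content-based): represent each candidate segment by its term
    frequencies and drop (merge across) a boundary when the adjacent
    segments' vocabularies are still too similar to signal a topic change."""
    cuts = [0] + boundaries + [len(sentences)]
    segs = [Counter(" ".join(sentences[a:b]).lower().split())
            for a, b in zip(cuts, cuts[1:])]
    return [b for b, left, right in zip(boundaries, segs, segs[1:])
            if cosine(left, right) < merge_threshold]   # content really shifted


if __name__ == "__main__":
    talk = [
        "welcome to this talk on query processing in a relational database",
        "query processing in a relational database relies on an optimizer",
        "we now turn to networking and how routers forward packets",
        "routers forward packets using routing tables they maintain",
    ]
    candidates = pass1_boundaries(talk)
    print("pass 1 candidates:", candidates)          # boundaries at [2, 3]
    print("pass 2 validated:", pass2_merge(talk, candidates))  # [2] survives
```

In this toy run, pass 1 over-segments (it splits the two networking sentences because they share few exact bigrams), and pass 2 merges that spurious boundary because the segments' content-bearing vocabularies still overlap strongly, mirroring the validate-and-merge role the abstract assigns to the second pass.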