A decision-tree-based algorithm for speech/music classification and segmentation

  • Authors:
  • Yizhar Lavner; Dima Ruinskiy

  • Affiliations:
  • Department of Computer Science, Tel-Hai College, Tel-Hai, Israel; Department of Computer Science, Tel-Hai College, Tel-Hai, Israel, and Israeli Development Center, Intel Corporation, Haifa, Israel

  • Venue:
  • EURASIP Journal on Audio, Speech, and Music Processing

  • Year:
  • 2009

Abstract

We present an efficient algorithm for segmenting audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used to compute various time-domain and frequency-domain features, separately for speech and music signals, and to estimate the optimal speech/music thresholds based on the probability density functions of the features. An automatic procedure selects the features that best separate the two classes. In the classification phase, an initial decision is made for each segment of the audio signal using a three-stage sieve-like approach that applies both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique averages the decision on each segment with past segment decisions. Extensive evaluation of the algorithm on a database of more than 12 hours of speech and more than 22 hours of music showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can easily be adapted to different audio types and is suitable for real-time operation.
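
As a rough illustration of the pipeline the abstract describes (framewise features, thresholds learned from training data, a rule-based decision per segment, and temporal smoothing of the decisions), the following minimal Python sketch may help. The two features, the threshold values, and the simple vote are illustrative placeholders: the actual algorithm selects its features automatically, estimates thresholds from the features' probability density functions, and combines Bayesian and rule-based decisions in a three-stage sieve.

```python
import numpy as np

def zero_crossing_rate(frame):
    # Fraction of sign changes per sample; often higher for speech
    # (unvoiced fricatives) than for sustained musical tones.
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def spectral_centroid(frame, sr):
    # Magnitude-weighted mean frequency of the frame's spectrum.
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

def classify_segment(segment, sr, thresholds):
    # Rule-based decision: each feature votes "speech" (1) or
    # "music" (0) against its learned threshold; the segment is
    # labeled speech if at least half the votes are for speech.
    votes = [
        zero_crossing_rate(segment) > thresholds["zcr"],
        spectral_centroid(segment, sr) > thresholds["centroid"],
    ]
    return 1 if np.mean(votes) >= 0.5 else 0

def smooth_decisions(decisions, history=5):
    # Average each raw decision with the preceding ones to suppress
    # spurious rapid speech/music alternations, as in the abstract.
    smoothed = []
    for i in range(len(decisions)):
        window = decisions[max(0, i - history + 1): i + 1]
        smoothed.append(1 if np.mean(window) >= 0.5 else 0)
    return smoothed

# Example: classify 40 ms segments of a synthetic signal at 16 kHz.
sr = 16000
thresholds = {"zcr": 0.08, "centroid": 1500.0}  # hypothetical learned values
signal = np.random.randn(sr)                    # stand-in for real audio
segments = signal.reshape(-1, int(0.04 * sr))
raw_labels = [classify_segment(s, sr, thresholds) for s in segments]
labels = smooth_decisions(raw_labels)
```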