Discriminating Between Pitched Sources in Music Audio

  • Author: M. R. Every
  • Affiliation: Audience, Inc., Mountain View, CA
  • Venue: IEEE Transactions on Audio, Speech, and Language Processing
  • Year: 2008


Abstract

Though humans find it relatively easy to identify and/or isolate different sources within polyphonic music, emulating this ability by computer is a challenging task, and one with direct relevance to music content description and information retrieval applications. For an automated system without any prior knowledge of a recording, a possible solution is to first segment the recording into notes or regions with some time-frequency contiguity, and then collect into groups those units that are acoustically similar and hence likely to arise from a common source. This article addresses the second subtask and provides two main contributions: (1) the derivation of a suboptimal subset, out of a wide range of common audio features, that maximizes the potential to discriminate between pitched sources in polyphonic music, and (2) an estimate of the improvement in accuracy achievable by using features other than pitch in the grouping process. In addition, the hypothesis was tested that more discriminative features can be obtained by applying source separation techniques prior to feature computation. Machine learning techniques were applied to an annotated database of polyphonic recordings (containing 3181 labeled audio segments) spanning a wide range of musical genres. Average source-labeling accuracies of 68% and 76% were obtained with a 10-dimensional feature subset when the number of sources per recording was unknown and known a priori, respectively.
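As a concrete illustration of the grouping subtask, the sketch below clusters per-segment feature vectors into putative sources, covering both the case where the number of sources is known a priori and the case where it must be estimated. This is a minimal sketch, not the paper's method: the mean-MFCC features (standing in for the 10-dimensional feature subset), the k-means clustering, and the silhouette-based estimate of the source count are all assumptions made for illustration.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler


def segment_features(y, sr, n_dims=10):
    """Summarize one note/region segment as a fixed-length vector
    (mean MFCCs here, standing in for a selected feature subset)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_dims)
    return mfcc.mean(axis=1)


def group_segments(feature_matrix, n_sources=None, max_sources=8):
    """Cluster segment feature vectors into putative sources.

    If n_sources is given (the 'known a priori' case), cluster directly;
    otherwise (the 'unknown' case), pick the cluster count in
    [2, max_sources] that maximizes the silhouette score (one plausible
    heuristic, not the paper's method).
    """
    X = StandardScaler().fit_transform(np.asarray(feature_matrix))
    if n_sources is not None:
        return KMeans(n_clusters=n_sources, n_init=10,
                      random_state=0).fit_predict(X)
    best_labels, best_score = None, -np.inf
    for k in range(2, min(max_sources, len(X) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_labels, best_score = labels, score
    return best_labels


# Hypothetical usage: `segments` is a list of (audio_array, sample_rate)
# pairs, one per detected note/region in the recording.
# X = np.vstack([segment_features(y, sr) for y, sr in segments])
# labels_known = group_segments(X, n_sources=3)    # source count known
# labels_unknown = group_segments(X)               # source count unknown
```

In this framing, the 68% versus 76% accuracy gap reported in the abstract corresponds to the difference between the unknown-count and known-count branches of `group_segments`: estimating the number of sources introduces an additional opportunity for error.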