Discriminant feature analysis for music timbre recognition and automatic indexing

  • Authors:
  • Xin Zhang; Zbigniew W. Raś; Agnieszka Dardzińska

  • Affiliations:
  • Univ. of North Carolina, Dept. of Comp. Science, Charlotte, NC; Univ. of North Carolina, Dept. of Comp. Science, Charlotte, NC and Polish-Japanese Institute of Information Technology, Warsaw, Poland; Bialystok Technical Univ., Dept. of Comp. Science, Bialystok, Poland

  • Venue:
  • MCD'07: Proceedings of the 3rd ECML/PKDD International Conference on Mining Complex Data
  • Year:
  • 2007

Abstract

The high volume of digital music recordings in Internet repositories has created a tremendous need for cooperative recommendation systems that help users find their favorite music pieces. Music instrument identification is one of the important subtasks of content-based automatic indexing, for which the authors developed novel temporal features and built a multi-hierarchical decision system S containing the low-level MPEG-7 descriptors as well as other popular descriptors for describing music sound objects. The decision attributes in S are hierarchical and include the Hornbostel-Sachs classification and a generalization by articulation. The information richness hidden in these descriptors has a strong implication for the confidence of classifiers built from S. Rule-based classifiers provide approximate definitions of the values of decision attributes and are used as a tool by content-based Automatic Indexing Systems (AIS). Hierarchical decision attributes allow the indexing to be performed at different granularity levels of music instrument classes: we can identify not only the instruments playing in a given music piece but also their classes when instrument-level identification fails. The quality of an AIS can be verified using precision and recall under two interpretations, user-based and system-based [16]; the AIS engine follows the system-based interpretation.
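The fallback from instrument-level to class-level indexing described in the abstract can be illustrated with a minimal sketch. The hierarchy entries, confidence threshold, and function names below are hypothetical placeholders, not the paper's actual rule-based classifiers or feature set; the sketch only shows how a hierarchical decision attribute lets the indexer back off to a coarser Hornbostel-Sachs class when no single instrument is identified with enough confidence.

```python
# Hypothetical instrument -> Hornbostel-Sachs family mapping (illustrative only).
HIERARCHY = {
    "violin": "chordophone",
    "cello": "chordophone",
    "flute": "aerophone",
    "trumpet": "aerophone",
    "snare_drum": "membranophone",
}

def index_sound_object(scores, confidence_threshold=0.6):
    """Return the most specific label that is confidently assigned.

    scores: dict mapping instrument name -> classifier confidence in [0, 1].
    If no single instrument is confident enough, fall back to the family
    (class) level by aggregating confidences per family.
    """
    # Instrument-level decision: pick the highest-scoring instrument.
    instrument, conf = max(scores.items(), key=lambda kv: kv[1])
    if conf >= confidence_threshold:
        return ("instrument", instrument, conf)

    # Fallback: aggregate confidences per Hornbostel-Sachs family and
    # label the sound object at the coarser granularity level.
    family_scores = {}
    for inst, c in scores.items():
        family = HIERARCHY[inst]
        family_scores[family] = family_scores.get(family, 0.0) + c
    family, fconf = max(family_scores.items(), key=lambda kv: kv[1])
    return ("family", family, fconf)

if __name__ == "__main__":
    # Ambiguous between two chordophones: instrument-level identification
    # fails, but the family-level label is still assigned with high confidence.
    print(index_sound_object({"violin": 0.45, "cello": 0.40, "flute": 0.15}))
```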