A data model for music information retrieval

  • Author: Tamar Berman
  • Affiliation: Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, Champaign, IL
  • Venue: NGITS'06: Proceedings of the 6th International Conference on Next Generation Information Technologies and Systems
  • Year: 2006

Abstract

This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting MIDI sequences of music by W.A. Mozart into the time series representation and analyzing them with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model lies in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within one another, or may be interspersed with other patterns or occurrences. A unique feature of the model is its use of time intervals as the basic representational unit, which opens possibilities for future application to audio data.
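
To make the representation concrete, below is a minimal Python sketch of the kind of conversion the abstract describes: MIDI-derived note events are sampled onto an equally spaced time grid, and each grid slot becomes a 12-dimensional vector marking which pitch classes are sounding. The (onset, duration, pitch) note format, the grid step, and the binary encoding are illustrative assumptions, not necessarily the paper's actual scheme.

    from typing import List, Tuple

    def to_chroma_series(
        notes: List[Tuple[float, float, int]],  # assumed format: (onset_sec, duration_sec, midi_pitch)
        step: float = 0.25,                     # hypothetical grid resolution in seconds
    ) -> List[List[int]]:
        """Sample sounding notes onto an equally spaced time grid; each slot
        is a 12-dimensional binary vector of the active pitch classes."""
        end = max((onset + dur) for onset, dur, _ in notes) if notes else 0.0
        n_steps = int(end / step) + (1 if end % step else 0)
        series = [[0] * 12 for _ in range(n_steps)]
        for onset, dur, pitch in notes:
            pc = pitch % 12                     # pitch class: C=0, C#=1, ..., B=11
            first = int(onset / step)
            last = int((onset + dur - 1e-9) / step)
            for t in range(first, min(last + 1, n_steps)):
                series[t][pc] = 1               # pitch class pc sounds during slot t
        return series

    # Example: a C-major triad (C4, E4, G4) held for one second on a 0.25 s grid
    if __name__ == "__main__":
        triad = [(0.0, 1.0, 60), (0.0, 1.0, 64), (0.0, 1.0, 67)]
        for vec in to_chroma_series(triad):
            print(vec)  # each slot: 1s at indices 0 (C), 4 (E), 7 (G)

Because every slot carries both the melody note and any accompanying harmony notes, melodic and harmonic features coexist in the same vector sequence, which is what allows the mixed pattern queries the abstract mentions.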