Modeling and segmentation of audio descriptor profiles with segmental models

  • Authors:
  • Julien Bloit; Nicolas Rasamimanana; Frédéric Bevilacqua

  • Affiliations:
  • Ircam CNRS UMR STMS, 1 Place Igor Stravinsky, 75004 Paris, France (all authors)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2010


Abstract

We present a method to model the temporal profiles of sound descriptors using segmental models. Unlike standard HMMs, this approach allows the fine structure of temporal profiles to be modeled with a reduced number of states. These states, which we call primitives, can be chosen by the user based on prior knowledge and assembled to model symbolic musical elements. In this paper, we describe this general methodology and evaluate it on a dataset of violin recordings containing crescendo/decrescendo, glissando and sforzando gestures. The results show that, in this context, the segmental model can segment and recognize these different musical elements with satisfactory accuracy.
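
To illustrate the general idea behind segment-level decoding (as opposed to frame-level HMM decoding), here is a minimal, hypothetical sketch in Python. The primitive shapes (`flat`, `rise`, `fall`), the least-squares segment cost, and the function names are illustrative assumptions and do not reproduce the authors' actual formulation; the sketch only shows how a descriptor profile could be partitioned into variable-length segments, each explained by one primitive, via dynamic programming.

```python
import numpy as np

# Hypothetical primitive shapes (illustrative, not the paper's set):
# each maps a unit time axis to a normalized profile template.
PRIMITIVES = {
    "flat": lambda t: np.zeros_like(t),
    "rise": lambda t: t,
    "fall": lambda t: 1.0 - t,
}

def segment_cost(x, shape):
    """Least-squares cost of fitting one primitive shape (up to an
    offset and a scale factor) to an observed descriptor segment x."""
    t = np.linspace(0.0, 1.0, len(x))
    template = shape(t)
    # Solve x ~ a*template + b in the least-squares sense.
    A = np.column_stack([template, np.ones_like(template)])
    coef, _, _, _ = np.linalg.lstsq(A, x, rcond=None)
    return float(np.sum((A @ coef - x) ** 2))

def segmental_decode(x, min_len=5, max_len=50):
    """Dynamic programming over segment boundaries: unlike a frame-level
    HMM, each state (primitive) emits a whole variable-length segment."""
    n = len(x)
    best = np.full(n + 1, np.inf)   # best[i] = cost of explaining x[:i]
    best[0] = 0.0
    back = [None] * (n + 1)         # (segment start, label) backpointers
    for end in range(min_len, n + 1):
        for length in range(min_len, min(max_len, end) + 1):
            start = end - length
            if not np.isfinite(best[start]):
                continue
            for label, shape in PRIMITIVES.items():
                c = best[start] + segment_cost(x[start:end], shape)
                if c < best[end]:
                    best[end] = c
                    back[end] = (start, label)
    # Trace back the optimal segmentation.
    segments, i = [], n
    while i > 0:
        start, label = back[i]
        segments.append((start, i, label))
        i = start
    return segments[::-1]

if __name__ == "__main__":
    # Toy descriptor profile: flat, then a crescendo-like rise, then a fall.
    x = np.concatenate([np.full(30, 0.2),
                        np.linspace(0.2, 1.0, 40),
                        np.linspace(1.0, 0.3, 30)])
    x += 0.02 * np.random.default_rng(0).standard_normal(len(x))
    for start, end, label in segmental_decode(x):
        print(f"[{start:3d}, {end:3d})  {label}")
```

In this toy setting, the decoder jointly chooses segment boundaries and primitive labels by minimizing a total fit cost, which mirrors the segmental-model property highlighted in the abstract: a small set of user-chosen primitives can account for extended temporal shapes that a frame-wise HMM would need many states to capture.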