Polyphonic monotimbral music transcription using dynamic networks

  • Authors:
  • Antonio Pertusa; José M. Iñesta

  • Affiliations:
  • Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Apartado de correos 99, 03080 Alicante, Spain (both authors)

  • Venue:
  • Pattern Recognition Letters - Special issue: Artificial neural networks in pattern recognition
  • Year:
  • 2005

Abstract

The automatic extraction of the notes played in a digital musical signal (automatic music transcription) is an open problem. A number of techniques have been applied to it without conclusive results. The monotimbral polyphonic version of the problem is addressed here: a single instrument is played and more than one note can sound at the same time. This work approaches the problem through the identification of the spectral pattern of a given instrument in the frequency domain. This is achieved using time-delay neural networks fed with the band-grouped spectrogram of a polyphonic monotimbral music recording. The use of an example-based learning scheme such as neural networks allows our system to avoid relying on an explicit auditory model. A number of issues must be addressed to obtain a robust and powerful system, but promising results on synthesized instruments are presented.
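
To make the described architecture concrete, the sketch below shows one possible form of a time-delay network operating on a band-grouped spectrogram, expressed as 1-D convolutions over the time axis. This is not the authors' implementation: the number of bands, the context window, the layer widths, and the 88-note output range are illustrative assumptions.

```python
# Minimal sketch of a time-delay neural network (TDNN) for frame-level note
# detection from a band-grouped spectrogram.  All sizes below are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

N_BANDS = 94   # assumed number of spectral bands per frame
N_NOTES = 88   # assumed note range (piano keys)
CONTEXT = 5    # assumed time-delay window: 5 consecutive frames


class TDNNTranscriber(nn.Module):
    """Maps a spectrogram of shape (batch, bands, frames) to per-frame note
    activations of shape (batch, notes, frames).  The time-delay structure is
    implemented as 1-D convolutions along the frame axis."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # First layer sees CONTEXT consecutive frames of all bands at once.
            nn.Conv1d(N_BANDS, 128, kernel_size=CONTEXT, padding=CONTEXT // 2),
            nn.ReLU(),
            # Second layer maps the hidden features to one activation per note.
            nn.Conv1d(128, N_NOTES, kernel_size=1),
            nn.Sigmoid(),  # independent on/off probability for each note
        )

    def forward(self, spectrogram):
        return self.net(spectrogram)


if __name__ == "__main__":
    model = TDNNTranscriber()
    # One excerpt: 1 example, N_BANDS bands, 200 time frames of random data.
    frames = torch.rand(1, N_BANDS, 200)
    activations = model(frames)      # shape (1, 88, 200)
    notes_on = activations > 0.5     # threshold to decide which notes sound
    print(notes_on.shape)
```

In this formulation the network is trained from examples of spectrogram frames paired with the notes that were actually sounding, which is what lets the example-based scheme stand in for an explicit auditory model.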