Static and dynamic classification methods for polyphonic transcription of piano pieces in different musical styles

  • Authors:
  • Giovanni Costantini; Massimiliano Todisco; Massimo Carota; Daniele Casali

  • Affiliations:
  • Department of Electronic Engineering, University of Rome "Tor Vergata", Italy (all authors); G. Costantini also with the Institute of Acoustics "O. M. Corbino", Roma, Italy

  • Venue:
  • ICC'08 Proceedings of the 12th WSEAS international conference on Circuits
  • Year:
  • 2008

Abstract

In this paper, we present two methods based on neural networks for the automatic transcription of polyphonic piano music. The input to these methods consists of piano music recordings stored in WAV files, while the output is the pitch of every note in the corresponding score. The aim of this work is to compare the accuracy achieved by a feed-forward neural network, the MLP (MultiLayer Perceptron), with that of a recurrent neural network, the ENN (Elman Neural Network). Signal processing techniques based on the CQT (Constant-Q Transform) are used to create a time-frequency representation of the input signals. Since large-scale tests were required, the whole process (synthesis of audio data generated from MIDI files, comparison of the results with the original score) has been automated. Training, validation, and test sets have been generated with reference to three different musical styles, respectively represented by J. S. Bach's inventions, F. Chopin's nocturnes, and C. Debussy's preludes.
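The CQT front-end described above maps each analysis frame onto logarithmically spaced frequency bins, so that one bin per semitone aligns naturally with piano pitches. A minimal sketch of such a transform is shown below; the parameters (88 bins from A0 at 27.5 Hz, 12 bins per octave, Hamming window) are illustrative assumptions, since the paper's abstract does not state its exact configuration.

```python
import numpy as np

def cqt_frame(frame_start, signal, sr=44100, f_min=27.5,
              bins_per_octave=12, n_bins=88):
    """Naive Constant-Q Transform of one analysis frame.

    Bin k is centred on f_min * 2**(k/bins_per_octave); each bin uses a
    window whose length keeps the ratio Q = f_k / bandwidth constant.
    Returns the magnitude spectrum (one value per bin).
    """
    # Constant quality factor: adjacent bins are one semitone apart
    Q = 1.0 / (2.0 ** (1.0 / bins_per_octave) - 1.0)
    mags = np.zeros(n_bins)
    for k in range(n_bins):
        f_k = f_min * 2.0 ** (k / bins_per_octave)
        N_k = int(np.ceil(Q * sr / f_k))          # window length shrinks as f_k grows
        seg = signal[frame_start:frame_start + N_k]
        if len(seg) < N_k:                        # zero-pad at the signal's end
            seg = np.pad(seg, (0, N_k - len(seg)))
        n = np.arange(N_k)
        # Windowed complex exponential at the bin's centre frequency
        kernel = np.hamming(N_k) * np.exp(-2j * np.pi * Q * n / N_k) / N_k
        mags[k] = np.abs(np.dot(seg, kernel))
    return mags
```

For example, a pure tone at 440 Hz (A4) produces its largest magnitude in bin 48, i.e. 48 semitones above A0, which is what makes this representation a convenient input layer for the pitch classifiers compared in the paper.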