Automatic music transcription: challenges and future directions

  • Authors:
  • Emmanouil Benetos, Simon Dixon, Dimitrios Giannoulis, Holger Kirchhoff, Anssi Klapuri

  • Affiliations:
  • Emmanouil Benetos: Department of Computer Science, City University London, London, UK
  • Simon Dixon: Centre for Digital Music, Queen Mary University of London, London, UK
  • Dimitrios Giannoulis: Centre for Digital Music, Queen Mary University of London, London, UK
  • Holger Kirchhoff: Centre for Digital Music, Queen Mary University of London, London, UK
  • Anssi Klapuri: Ovelin Ltd., Helsinki 00100, Finland, and Tampere University of Technology, Tampere 33720, Finland

  • Venue:
  • Journal of Intelligent Information Systems
  • Year:
  • 2013

Abstract

Automatic music transcription is considered by many to be a key enabling technology in music signal processing. However, the performance of transcription systems is still significantly below that of a human expert, and accuracies reported in recent years appear to have reached a plateau, although the field remains very active. In this paper we analyse the limitations of current methods and identify promising directions for future research. Current transcription methods use general-purpose models which are unable to capture the rich diversity found in music signals. One way to overcome the limited performance of transcription systems is to tailor algorithms to specific use cases. Semi-automatic approaches are another way of achieving a more reliable transcription. Also, the wealth of musical scores and corresponding audio data now available is a rich potential source of training data, via forced alignment of audio to scores, but large-scale utilisation of such data has yet to be attempted. Other promising approaches include the integration of information from multiple algorithms and from different musical aspects.