A learning scheme for generating expressive music performances of jazz standards

  • Authors:
  • Rafael Ramirez; Amaury Hazan

  • Affiliations:
  • Music Technology Group, Pompeu Fabra University, Barcelona, Spain (both authors)

  • Venue:
  • IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence
  • Year:
  • 2005

Abstract

We describe our approach for generating expressive music performances of monophonic jazz melodies. It consists of three components: (a) a melodic transcription component, which extracts a set of acoustic features from monophonic recordings; (b) a machine learning component, which induces an expressive transformation model from the extracted acoustic features; and (c) a melody synthesis component, which generates expressive monophonic output (MIDI or audio) from inexpressive melody descriptions using the induced expressive transformation model. In this paper we concentrate on the machine learning component, in particular on the learning scheme we use for generating expressive audio from a score.
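
The three-component pipeline lends itself to a simple software outline. The Python sketch below is only an illustration under assumptions: the class and function names are hypothetical, a nearest-neighbour lookup stands in for the paper's actual learning scheme, and the transcription front end and synthesis back end are omitted.

```python
# Minimal sketch (assumed names throughout) of the pipeline described above:
# score notes go in, an induced expressive transformation model predicts
# timing/duration/energy deviations, and the transformed notes would then be
# rendered as MIDI or audio by a synthesis back end.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ScoreNote:
    """Inexpressive note description taken from the score."""
    pitch: int        # MIDI pitch number
    onset: float      # nominal onset (beats)
    duration: float   # nominal duration (beats)


@dataclass
class PerformedNote:
    """Note with expressive deviations, as extracted by transcription (a)."""
    pitch: int
    onset: float      # performed onset (beats)
    duration: float   # performed duration (beats)
    energy: float     # relative loudness


class ExpressiveTransformationModel:
    """(b) Induces a mapping from score notes to performed deviations.
    A 1-nearest-neighbour lookup stands in for the actual learning scheme."""

    def fit(self, score: List[ScoreNote], performance: List[PerformedNote]):
        # Assumes the score and performance note lists are already aligned.
        self.examples: List[Tuple[ScoreNote, PerformedNote]] = list(zip(score, performance))
        return self

    def transform(self, note: ScoreNote) -> PerformedNote:
        # Find the most similar training note and reuse its observed deviations.
        score_ex, perf_ex = min(
            self.examples,
            key=lambda ex: abs(ex[0].pitch - note.pitch) + abs(ex[0].duration - note.duration),
        )
        return PerformedNote(
            pitch=note.pitch,
            onset=note.onset + (perf_ex.onset - score_ex.onset),
            duration=note.duration * (perf_ex.duration / score_ex.duration),
            energy=perf_ex.energy,
        )


if __name__ == "__main__":
    # Toy aligned training pair: two score notes and how they were performed.
    score = [ScoreNote(60, 0.0, 1.0), ScoreNote(62, 1.0, 0.5)]
    performed = [PerformedNote(60, 0.03, 1.10, 0.8), PerformedNote(62, 1.02, 0.45, 0.6)]
    model = ExpressiveTransformationModel().fit(score, performed)

    # (c) The resulting expressive notes would be handed to a MIDI/audio synthesizer.
    print(model.transform(ScoreNote(64, 2.0, 1.0)))
```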