Changing timbre and phrase in existing musical performances as you like: manipulations of single part using harmonic and inharmonic models

  • Authors:
  • Naoki Yasuraoka, Takehiro Abe, Katsutoshi Itoyama, Toru Takahashi, Tetsuya Ogata, and Hiroshi G. Okuno

  • Affiliations:
  • Graduate School of Informatics, Kyoto University, Kyoto, Japan (all authors)

  • Venue:
  • MM '09 Proceedings of the 17th ACM international conference on Multimedia
  • Year:
  • 2009

Abstract

This paper presents a new music manipulation method that can change the timbre and phrases of an existing instrumental performance within a polyphonic sound mixture. The method consists of three primitive functions: 1) extracting and analyzing a single instrumental part from polyphonic music signals, 2) mixing the instrument timbre with that of another instrument, and 3) rendering a new phrase expression for another given score. The resulting customized part is re-mixed with the remaining parts of the original performance to generate new polyphonic music signals. A single instrumental part is extracted using an integrated tone model that consists of harmonic and inharmonic tone models, with the aid of the score of that part. The extraction incorporates a residual model for the single instrumental part in order to avoid crosstalk between instrumental parts. The extracted model parameters are classified into their averages and deviations: the former are treated as instrument timbre and are customized by mixing, while the latter are treated as phrase expression and are customized by rendering. We evaluated the method in three experiments. The first focused on the introduction of the residual model and showed that it improves the accuracy of parameter estimation by 35.0 points. The second focused on timbre customization and showed that our method is more robust than a conventional sound analysis-synthesis method, STRAIGHT, by 42.9 points in spectral distance. The third focused on the acoustic fidelity of the customized performance and showed that rendering the phrase expression according to the note sequence reproduces the performance more accurately by 9.2 points in spectral distance than a rendering method that ignores the note sequence.
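
The abstract models each extracted note as the sum of a harmonic tone model and an inharmonic tone model. The sketch below is not the authors' implementation; it only illustrates one plausible form of such an integrated spectral model, with Gaussian-shaped harmonic peaks added to a smooth inharmonic envelope. All function and parameter names (integrated_tone_spectrum, sigma, inharmonic_env) are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of an integrated tone model:
# a short-time amplitude spectrum built from harmonic Gaussian peaks plus a
# smooth inharmonic (noise-like) envelope.
import numpy as np

def integrated_tone_spectrum(freqs, f0, harmonic_amps, sigma, inharmonic_env):
    """Amplitude spectrum on the frequency grid `freqs` (Hz).

    freqs          : array of analysis-frequency bins
    f0             : fundamental frequency of the note (Hz)
    harmonic_amps  : amplitude of each harmonic partial
    sigma          : spectral width of each harmonic peak (Hz)
    inharmonic_env : smooth envelope modeling non-harmonic energy
    """
    spectrum = np.array(inharmonic_env, dtype=float)
    for n, amp in enumerate(harmonic_amps, start=1):
        # Each harmonic is a Gaussian-shaped peak centered at n * f0.
        spectrum += amp * np.exp(-0.5 * ((freqs - n * f0) / sigma) ** 2)
    return spectrum

# Toy usage: three harmonics of a 440 Hz tone over a 0-4 kHz grid.
freqs = np.linspace(0, 4000, 512)
spec = integrated_tone_spectrum(freqs, f0=440.0,
                                harmonic_amps=[1.0, 0.5, 0.25],
                                sigma=20.0,
                                inharmonic_env=0.01 * np.ones_like(freqs))
```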
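
The split of extracted model parameters into averages (instrument timbre) and deviations (phrase expression) can likewise be illustrated with a minimal sketch, assuming the parameters are arranged as a notes-by-parameters array; the linear mixing of two timbres shown here is an assumption used only for illustration, not the paper's mixing procedure.

```python
# Minimal sketch: averages over notes stand in for "instrument timbre",
# per-note deviations stand in for "phrase expression"; customization mixes
# two timbres and re-applies the original expression.
import numpy as np

def split_timbre_expression(note_params):
    timbre = note_params.mean(axis=0)      # average  -> timbre
    expression = note_params - timbre      # deviation -> phrase expression
    return timbre, expression

def customize(timbre_a, timbre_b, expression, alpha=0.5):
    # Interpolate between two instrument timbres, then restore the expression.
    mixed_timbre = (1.0 - alpha) * timbre_a + alpha * timbre_b
    return mixed_timbre + expression

# Toy usage with random stand-ins: 16 notes, 8 model parameters per note.
params_a = np.random.rand(16, 8)
params_b = np.random.rand(16, 8)
timbre_a, expr_a = split_timbre_expression(params_a)
timbre_b, _ = split_timbre_expression(params_b)
new_params = customize(timbre_a, timbre_b, expr_a, alpha=0.5)
```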