Performance Control Driven Violin Timbre Model Based on Neural Networks

  • Authors:
  • A. P. Carrillo; J. Bonada; E. Maestre; E. Guaus; M. Blaauw

  • Affiliations:
  • MTG (Music Technology Group), Universitat Pompeu Fabra, Barcelona, Spain

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing
  • Year:
  • 2012

Abstract

The objective of this research is to model the relationship between the actions performed by a violinist and the sound that these actions produce. Violinist actions and audio are captured during real performances by means of a newly developed sensing system, from which bowing and audio descriptors are computed. A database is built from these data and used to train a generative model based on neural networks. The model is driven by a continuous sequence of bowing and fingering controls and generates the corresponding sequence of spectral envelopes. The model is used for synthesis either alone, as a purely spectral model, by filling the predicted envelopes with harmonic and noise components, or coupled with a concatenative synthesizer, where the predicted envelopes are used as time-varying filters to transform the concatenated samples. The combination of sample concatenation with the timbre model preserves the sound quality inherent in the samples while providing a high level of control. Additionally, we analyze the violinist's control space and the influence of the controls on timbre.
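To make the control-to-timbre mapping concrete, the sketch below shows a minimal feed-forward network that maps a frame of performance controls to a frame of spectral-envelope coefficients. This is not the authors' implementation: the control descriptors, envelope resolution, architecture, and training data are all illustrative assumptions.

```python
# Minimal sketch of a control-driven timbre model: a feed-forward network
# mapping per-frame bowing/fingering controls (e.g., bow velocity, bow force,
# bow-bridge distance, played string, finger position) to a spectral-envelope
# frame. All names, dimensions, and the architecture are assumptions.

import torch
import torch.nn as nn

N_CONTROLS = 5        # assumed number of per-frame control descriptors
N_ENVELOPE_BINS = 40  # assumed spectral-envelope resolution (e.g., mel bands)

class ControlToTimbre(nn.Module):
    """Maps a frame of performance controls to a spectral-envelope frame."""
    def __init__(self, n_in=N_CONTROLS, n_hidden=64, n_out=N_ENVELOPE_BINS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_out),
        )

    def forward(self, controls):   # controls: (batch, N_CONTROLS)
        return self.net(controls)  # envelopes: (batch, N_ENVELOPE_BINS)

# Toy training loop on random data, standing in for the performance database
# of aligned bowing descriptors and spectral envelopes described in the paper.
model = ControlToTimbre()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

controls = torch.randn(256, N_CONTROLS)        # placeholder control frames
envelopes = torch.randn(256, N_ENVELOPE_BINS)  # placeholder target envelopes

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(controls), envelopes)
    loss.backward()
    optimizer.step()
```

At synthesis time, a predicted envelope sequence like this could either be filled with harmonic and noise components directly or applied as a time-varying filter over concatenated samples, as the abstract describes; that stage is not included in the sketch.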