Adaptation in natural and artificial systems
OpenSound Control: state of the art 2003
NIME '03 Proceedings of the 2003 conference on New interfaces for musical expression
Timbre interfaces using adjectives and adverbs
NIME '06 Proceedings of the 2006 conference on New interfaces for musical expression
Managing gesture and timbre for analysis and instrument control in an interactive environment
NIME '06 Proceedings of the 2006 conference on New interfaces for musical expression
Continuous-Time recurrent neural networks for generative and interactive musical performance
EuroGP'06 Proceedings of the 2006 international conference on Applications of Evolutionary Computing
From evolutionary composition to robotic sonification
EvoCOMNET'10 Proceedings of the 2010 international conference on Applications of Evolutionary Computation - Volume Part II
This paper describes an automated computer improviser that attempts to follow, and improvise against, the frequencies and timbres found in an incoming audio stream. The improviser is controlled by an ever-changing set of sequences generated by analysing the incoming audio stream (which may be a feed from a live musician) for physical and musical properties such as pitch and amplitude. Control data from these sequences is passed to the synthesis engine, where it is used to configure sonic events. These sonic events are generated by sound synthesis algorithms designed by an unsupervised genetic algorithm whose fitness function compares snapshots of the incoming audio to snapshots of the audio output of the evolving synthesizers in the spectral domain, driving the population to match the incoming sounds. The sound-generating performance system and the sound-designing evolutionary system run in parallel in real time to produce an interactive stream of synthesised sound. An overview of related systems is provided, the system itself is described, and some preliminary results are presented.
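The spectral-comparison fitness described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the function names, the Hann windowing, and the use of a normalised Euclidean distance between magnitude spectra are all assumptions about how such a fitness measure might be realised.

```python
# Hypothetical sketch of a spectral-matching fitness function: a candidate
# synthesizer's output frame is compared to a snapshot of the incoming audio
# in the spectral domain, so higher scores reward closer timbral matches.
import numpy as np

def magnitude_spectrum(frame: np.ndarray) -> np.ndarray:
    """Windowed FFT magnitude of one audio frame."""
    windowed = frame * np.hanning(len(frame))
    return np.abs(np.fft.rfft(windowed))

def spectral_fitness(candidate: np.ndarray, target: np.ndarray) -> float:
    """Higher is better: inverse Euclidean distance between the
    normalised magnitude spectra of two equal-length frames."""
    a = magnitude_spectrum(candidate)
    b = magnitude_spectrum(target)
    a /= (np.linalg.norm(a) or 1.0)  # normalise so loudness differences
    b /= (np.linalg.norm(b) or 1.0)  # do not dominate timbre matching
    return 1.0 / (1.0 + np.linalg.norm(a - b))
```

A frame compared with itself scores the maximum fitness of 1.0; in an evolutionary loop, such a score would rank candidate synthesis algorithms by how closely their output matches the live input snapshot.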