We describe a multi-agent architecture for an improvisation-oriented musician-machine interaction system that learns in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. The working system uses a hybrid architecture coupling two popular composition/performance environments, Max and OpenMusic, which communicate with each other, each handling the process at a different time/memory scale. The system can process real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, we present the statistical modeling tools and the concurrent agent architecture. Finally, we describe a prospective reinforcement-learning scheme for enhancing the system's realism.
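The sequence model at the heart of improvisation systems of this kind is typically a factor oracle, an automaton built incrementally over the performer's symbol stream whose forward transitions and suffix links support recombinant navigation of the learned material. As a minimal sketch (not the authors' actual implementation), the classic incremental construction of Allauzen, Crochemore, and Raffinot can be written as:

```python
def build_factor_oracle(seq):
    """Incrementally build a factor oracle over seq.

    Returns (trans, sfx): trans[i] maps a symbol to the target state of
    the forward transition leaving state i; sfx[i] is the suffix link of
    state i (sfx[0] = -1 by convention).
    """
    n = len(seq)
    trans = [{} for _ in range(n + 1)]  # forward transitions per state
    sfx = [-1] * (n + 1)                # suffix links
    for i, sym in enumerate(seq):
        trans[i][sym] = i + 1           # factor link to the new state
        k = sfx[i]
        # Walk suffix links, adding shortcut transitions on sym
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i + 1
            k = sfx[k]
        # Suffix link of the new state
        sfx[i + 1] = 0 if k == -1 else trans[k][sym]
    return trans, sfx
```

In an improvisation context, generation alternates between following forward transitions (replaying the original sequence) and jumping along suffix links (recombining material that shares a common context), which is what gives the machine improvisation its stylistic coherence.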