The automated creation of perceptible and compelling large-scale forms and hierarchical structures that unfold over time is a nontrivial challenge for generative models of multimedia content. It is nonetheless an important goal for multimedia authors and artists who work in time-dependent media. This paper and its associated demonstration materials present a generative model for the automated composition of music. The model draws on theories of emotion and meaning in music, and relies on research in cognition and perception to ensure that the generated music is communicative and intelligible to listeners. It employs a coevolutionary genetic algorithm whose population consists of musical components. The evolutionary process yields musical compositions that are realized as digital audio, a live performance work, and a score in conventional notation; these works exhibit musical features consistent with the aesthetic and compositional goals described in the paper.
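The paper's coevolutionary model of interacting musical components is not detailed in the abstract; as a rough orientation, the following is a minimal sketch of a conventional genetic-algorithm loop evolving short pitch sequences. All names, the fitness criterion, and the parameters are illustrative assumptions, not the paper's actual method.

```python
import random

# Illustrative sketch only: a plain genetic algorithm evolving 8-note
# phrases. The toy fitness rewards stepwise melodic motion, standing in
# for the perception-based aesthetic goals the paper describes.

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, MIDI note numbers
PHRASE_LEN = 8

def random_phrase():
    return [random.choice(SCALE) for _ in range(PHRASE_LEN)]

def fitness(phrase):
    # Negative sum of interval sizes: smoother (more stepwise) is better.
    return -sum(abs(b - a) for a, b in zip(phrase, phrase[1:]))

def crossover(p1, p2):
    cut = random.randrange(1, PHRASE_LEN)   # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(phrase, rate=0.1):
    # Each note has a small chance of being replaced by a random scale tone.
    return [random.choice(SCALE) if random.random() < rate else n
            for n in phrase]

def evolve(generations=200, pop_size=40):
    pop = [random_phrase() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

A real coevolutionary system would maintain several interacting populations (e.g. rhythmic and melodic components) whose fitness depends on one another; this single-population loop only shows the basic evolutionary machinery.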