Generative model for the creation of musical emotion, meaning, and form

  • Authors: David Birchfield
  • Affiliation: Arizona State University
  • Venue: ETP '03: Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence
  • Year: 2003

Abstract

The automated creation of perceptible and compelling large-scale forms and hierarchical structures that unfold over time is a nontrivial challenge for generative models of multimedia content. Nonetheless, this is an important goal for multimedia authors and artists who work in time-dependent media. This paper and its associated demonstration materials present a generative model for the automated composition of music. The model draws on theories of emotion and meaning in music, and relies on research in cognition and perception to ensure that the generated music is communicative and intelligible to listeners. The model employs a coevolutionary genetic algorithm that comprises a population of musical components. The evolutionary process yields musical compositions that are realized as digital audio, a live performance work, and a musical score in conventional notation. These works exhibit musical features consistent with the aesthetic and compositional goals described in the paper.
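
To make the coevolutionary framing concrete, the sketch below shows one minimal way such an algorithm can be structured: a population of musical components (here, short phrases of MIDI pitches) in which each component's fitness depends on the other members of the population, so that components evolve in response to one another. The abstract does not specify the paper's representation, fitness criteria, or operators, so everything here, including the consonance-based fitness and the truncation selection scheme, is an illustrative assumption rather than the author's actual model.

```python
import random

POP_SIZE = 16        # number of musical components in the population
PHRASE_LEN = 8       # notes per component (MIDI pitch numbers)
GENERATIONS = 50
MUTATION_RATE = 0.1  # per-note chance of a small pitch perturbation

def random_component():
    """A musical component represented as a short phrase of MIDI pitches."""
    return [random.randint(48, 72) for _ in range(PHRASE_LEN)]

def fitness(component, population):
    """Coevolutionary fitness: a component is scored against its peers,
    rewarding consonant vertical intervals. This criterion is purely
    illustrative; the paper grounds its objectives in theories of
    musical emotion, meaning, and perception instead."""
    consonant = {0, 3, 4, 5, 7, 8, 9, 12}  # interval classes treated as consonant
    score = 0
    for other in population:
        if other is component:
            continue
        for a, b in zip(component, other):
            if abs(a - b) % 12 in consonant:
                score += 1
    return score

def crossover(p1, p2):
    """Single-point crossover between two parent phrases."""
    cut = random.randint(1, PHRASE_LEN - 1)
    return p1[:cut] + p2[cut:]

def mutate(component):
    """Nudge some pitches up or down by a step or two."""
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < MUTATION_RATE else p
            for p in component]

def evolve():
    """Evolve the population; fitness is re-evaluated against the
    current population each generation, which is what makes the
    process coevolutionary rather than a fixed-objective search."""
    population = [random_component() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=lambda c: fitness(c, population), reverse=True)
        parents = ranked[: POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return population

if __name__ == "__main__":
    final = evolve()
    print("Sample evolved phrase (MIDI pitches):", final[0])
```

In a full system along the paper's lines, the evolved components would be rendered as digital audio or notation rather than printed, and the fitness function would encode the perceptual and emotional criteria the paper describes.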