Performer-centered visual feedback for human-machine improvisation

  • Authors:
  • Alexandre R. J. François; E. Chew; Dennis Thurmond

  • Affiliations:
  • University of Southern California, Los Angeles, and Harvey Mudd College, Claremont, CA; University of Southern California; University of Southern California

  • Venue:
  • Computers in Entertainment (CIE) - Theoretical and Practical Computer Applications in Entertainment
  • Year:
  • 2011

Abstract

This article describes the design and implementation of the Multimodal Interactive Musical Improvisation (Mimi) system. Unique to Mimi is its visual interface, which provides the performer with instantaneous and continuous information on the state of the system, in contrast to other human-machine improvisation systems, which require performers to grasp and intuit possible extemporizations in response to machine-generated music without forewarning. In Mimi, the information displayed extends into the near future and reaches back into the recent past, giving the performer awareness of the musical context so that they can plan their responses accordingly. This article presents the details of Mimi's system design, the visual interface, and its implementation using the formalism defined by François' Software Architecture for Immersipresence (SAI) framework. Mimi is the result of a collaborative, iterative design process. We recorded the design sessions and present here findings from the transcripts that provide evidence for the impact of visual support on improvisation planning and design. The findings demonstrate that Mimi's visual interface offers musicians the opportunity to anticipate and to review decisions, thus making it an ideal performance and pedagogical tool for improvisation. It allows novices to create more contextually relevant improvisations and experts to be more inventive in their extemporizations.