Designing, playing, and performing with a vision-based mouth interface

  • Authors:
  • Michael J. Lyons; Michael Haehnel; Nobuji Tetsutani

  • Affiliations:
  • ATR Media Information Science Labs, Seika-cho, Soraku-gun, Kyoto, Japan; RWTH Aachen University, Aachen, Germany; ATR Media Information Science Labs, Seika-cho, Soraku-gun, Kyoto, Japan

  • Venue:
  • NIME '03: Proceedings of the 2003 Conference on New Interfaces for Musical Expression
  • Year:
  • 2003

Abstract

The role of the face and mouth in speech production, as well as in non-verbal communication, suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a head-worn miniature camera and a computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance that used the Mouthesizer interface.
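
As a rough illustration of the kind of pipeline the abstract describes, the sketch below captures camera frames, segments the mouth opening, and maps its width and height to MIDI control changes. It is a minimal approximation under stated assumptions, not the authors' published algorithm: simple intensity thresholding stands in for the paper's shape-extraction method, the camera is assumed to frame only the mouth region, and the control-change numbers (CC_WIDTH, CC_HEIGHT) and threshold value are arbitrary choices for illustration.

    # Hypothetical Mouthesizer-style pipeline (NOT the authors' algorithm).
    # Assumes a head-worn camera aimed at the mouth; segments the dark
    # mouth opening by intensity thresholding and sends its bounding-box
    # width and height as MIDI control changes.
    import cv2
    import mido

    CC_WIDTH, CC_HEIGHT = 20, 21    # arbitrary MIDI CC numbers

    cap = cv2.VideoCapture(0)       # head-worn camera framing the mouth
    out = mido.open_output()        # default MIDI output port

    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Dark pixels inside the open mouth -> binary mask
            _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                mouth = max(contours, key=cv2.contourArea)
                x, y, w, h = cv2.boundingRect(mouth)
                # Normalize shape parameters to the 0-127 MIDI range
                fh, fw = frame.shape[0], frame.shape[1]
                out.send(mido.Message('control_change', control=CC_WIDTH,
                                      value=min(127, int(127 * w / fw))))
                out.send(mido.Message('control_change', control=CC_HEIGHT,
                                      value=min(127, int(127 * h / fh))))
    finally:
        cap.release()
        out.close()

Bounding-box width and height are only the simplest stand-ins for the shape parameters the abstract mentions; a practical mapping would also smooth the values over successive frames to avoid jitter in the controlled sound.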