Facing the music: a facial action controlled musical interface
CHI '01 Extended Abstracts on Human Factors in Computing Systems
The role of the face and mouth in speech production, as well as in non-verbal communication, suggests using facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a head-worn miniature camera and a computer vision algorithm to extract shape parameters of the mouth opening and output them as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
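The gesture-to-sound mapping described above — normalized mouth-shape parameters converted to MIDI control changes — can be sketched as follows. The parameter names, value ranges, and controller numbers are illustrative assumptions, not the authors' implementation; a MIDI Control Change message is the standard three-byte sequence (0xB0 | channel, controller, value 0–127).

```python
def mouth_to_cc(height, width, channel=0, cc_height=1, cc_width=2):
    """Map normalized mouth-opening parameters (0.0-1.0) to MIDI CC messages.

    height, width: hypothetical shape parameters from a vision front end,
    already normalized to [0, 1]. Returns a list of raw 3-byte CC messages.
    """
    def scale(v):
        # Clamp to [0, 1], then scale to the 7-bit MIDI value range 0-127.
        v = min(max(v, 0.0), 1.0)
        return int(round(v * 127))

    status = 0xB0 | (channel & 0x0F)  # Control Change on the given channel
    return [
        (status, cc_height, scale(height)),  # e.g. mouth height -> mod wheel
        (status, cc_width, scale(width)),    # e.g. mouth width -> CC 2
    ]
```

In practice each message tuple would be written to a MIDI output port once per video frame; keeping the vision-to-MIDI scaling in one small function makes it easy to experiment with different gesture-to-sound mappings, as the abstract describes.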