Facing the music: a facial action controlled musical interface
CHI '01 Extended Abstracts on Human Factors in Computing Systems
Robust face feature analysis for automatic speechreading and character animation
FG '96: Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition
Mouthbrush: drawing and painting by hand and mouth
Proceedings of the 5th International Conference on Multimodal Interfaces
Tongue 'n' Groove: an ultrasound based music controller
NIME '02: Proceedings of the 2002 Conference on New Interfaces for Musical Expression
Creating sustained tones with the cicada's rapid sequential buckling mechanism
NIME '02: Proceedings of the 2002 Conference on New Interfaces for Musical Expression
Interactive Gesture Music performance interface
NIME '02: Proceedings of the 2002 Conference on New Interfaces for Musical Expression
Designing, playing, and performing with a vision-based mouth interface
NIME '03: Proceedings of the 2003 Conference on New Interfaces for Musical Expression
E-mic: extended mic-stand interface controller
NIME '03: Proceedings of the 2003 Conference on New Interfaces for Musical Expression
Head-tracking for gestural and continuous control of parameterized audio effects
NIME '03: Proceedings of the 2003 Conference on New Interfaces for Musical Expression
Sonify your face: facial expressions for sound generation
Proceedings of the International Conference on Multimedia
Advances in new interfaces for musical expression
ACM SIGGRAPH 2011 Courses
Eye. breathe. music: creating music through minimal movement
EVA '10: Proceedings of the 2010 International Conference on Electronic Visualisation and the Arts
Advances in new interfaces for musical expression
SIGGRAPH Asia 2012 Courses
Caruso: augmenting users with a tenor's voice
Proceedings of the 4th Augmented Human International Conference
Tangible and body-related interaction techniques for a singing voice synthesis installation
Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction
Creating new interfaces for musical expression
SIGGRAPH Asia 2013 Courses
We describe a simple, computationally light, real-time system for tracking the lower face and extracting information about the shape of the open mouth from a video sequence. The system allows unencumbered control of audio synthesis modules by action of the mouth. We report work in progress to use the mouth controller to interact with a physical model of sound production by the avian syrinx.
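The pipeline the abstract describes, mouth-shape measurements extracted from video driving the parameters of an audio synthesis module, could be sketched roughly as below. The measurement names (normalized mouth height and width), the control parameters (frequency and amplitude), and the linear mapping ranges are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a mouth-to-synthesis control mapping.
# Assumes an upstream tracker already yields normalized mouth openness
# (height) and width in [0, 1]; the frequency/amplitude targets and the
# linear mapping are hypothetical choices for illustration only.

def mouth_to_synth(mouth_height, mouth_width,
                   f_range=(110.0, 880.0), a_range=(0.0, 1.0)):
    """Map normalized mouth shape to a (frequency_hz, amplitude) pair.

    mouth_height -- vertical mouth opening, normalized to [0, 1]
    mouth_width  -- horizontal mouth extent, normalized to [0, 1]
    """
    # Clamp inputs so noisy tracker output cannot push the synth
    # parameters out of range.
    h = min(max(mouth_height, 0.0), 1.0)
    w = min(max(mouth_width, 0.0), 1.0)
    # Linear mappings: width controls pitch, openness controls loudness.
    freq = f_range[0] + w * (f_range[1] - f_range[0])
    amp = a_range[0] + h * (a_range[1] - a_range[0])
    return freq, amp
```

In a real-time loop, a function like this would be called once per video frame and its output sent to the synthesis engine (for example as control messages), keeping the per-frame cost trivially small in keeping with the "computationally light" goal.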