Using normalized RBF networks to map hand gestures to speech
We describe the implementation of an environment for Gesturally-Realized Audio, Speech and Song Performance (GRASSP), which includes a glove-based interface, a mapping/training interface, and a collection of Max/MSP/Jitter bpatchers that allow the user to improvise speech, song, sound synthesis, sound processing, sound localization, and video processing. The mapping/training interface provides a framework for performers to specify by example the mapping between gesture and sound or video controls. We demonstrate the effectiveness of the GRASSP environment for gestural control of musical expression by creating a gesture-to-voice system that is currently being used by performers.
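The mapping-by-example approach named in the title can be illustrated with a normalized radial basis function network: Gaussian activations around example gestures are normalized to sum to one, so the output is a smooth blend of the sound-control values recorded for each example. The sketch below is a minimal, hypothetical illustration of that idea, not the GRASSP implementation; the function names and the choice of Gaussian bases with training inputs as centers are assumptions for demonstration.

```python
import numpy as np

def normalized_rbf(x, centers, widths, weights):
    """Evaluate a normalized RBF network at input vector x.

    centers: (n, d) array of basis centers (e.g. example gesture poses)
    widths:  (n,) array of Gaussian widths, one per center
    weights: (n, k) array of per-center outputs (e.g. control values)
    """
    # Squared distance from x to each center.
    d2 = np.sum((centers - x) ** 2, axis=1)
    # Gaussian basis activations.
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    # Normalize so activations form a partition of unity.
    g = phi / np.sum(phi)
    # Output is the activation-weighted blend of per-center outputs.
    return g @ weights

# Two example "gestures" (1-D for brevity) mapped to two control values.
centers = np.array([[0.0], [1.0]])
widths = np.array([0.1, 0.1])
weights = np.array([[10.0], [20.0]])

# Near a training example, the network reproduces its control value;
# between examples, it interpolates smoothly.
at_example = normalized_rbf(np.array([0.0]), centers, widths, weights)
between = normalized_rbf(np.array([0.5]), centers, widths, weights)
```

Because the normalized activations always sum to one, the output stays within the convex hull of the example control values, which keeps interpolated gestures from producing out-of-range parameters.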