Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer, allowing the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary as well as direct control of fundamental frequency and volume. The best current version of Glove-TalkII uses several input devices (a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel network and a consonant network. The gating network and the consonant network are trained with examples from the user; the vowel network implements a fixed, user-defined relationship between hand position and vowel sound and requires no training examples. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII: he speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
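The vowel/consonant split described above is a mixture-of-experts arrangement: a gating network produces weights that blend the outputs of two expert networks into one set of synthesizer control parameters. The following is a minimal sketch of that blending step only; the layer shapes, input dimensionality, and random untrained weights are illustrative assumptions, not details of the actual Glove-TalkII networks.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT = 16    # flattened glove/tracker readings (assumed dimensionality)
N_PARAMS = 10   # formant-synthesizer control parameters (per the abstract)

def linear(in_dim, out_dim):
    """A single hypothetical linear layer standing in for a trained network."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return lambda x: x @ W + b

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

vowel_net = linear(N_INPUT, N_PARAMS)      # fixed hand-position-to-vowel mapping
consonant_net = linear(N_INPUT, N_PARAMS)  # would be trained per user
gate_net = linear(N_INPUT, 2)              # gating network: vowel vs. consonant

def gesture_to_controls(x):
    """Blend the two experts' outputs with the gating weights."""
    g = softmax(gate_net(x))               # g[0] + g[1] == 1
    return g[0] * vowel_net(x) + g[1] * consonant_net(x)

controls = gesture_to_controls(rng.normal(size=N_INPUT))
print(controls.shape)  # (10,)
```

In the real system this blending would run once per control frame, feeding the ten blended parameters to the formant synthesizer, while volume, pitch, and stop consonants bypass the networks entirely via the fixed device mappings.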