Gesture recognition is becoming a more common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can be a cumbersome task. Hidden Markov models (HMMs), a pattern recognition technique widely used in speech recognition, can also be used to recognize certain classes of gestures. Existing HMM toolkits for speech recognition can be adapted to gesture recognition, but doing so requires significant knowledge of the speech recognition literature and of how it maps onto gesture recognition. This paper introduces the Georgia Tech Gesture Toolkit (GT2k), which leverages Cambridge University's speech recognition toolkit, HTK, to provide tools that support gesture recognition research. GT2k provides capabilities for training models and allows both real-time and off-line recognition. This paper presents four ongoing projects that utilize the toolkit in a variety of domains.
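As a minimal sketch of the HMM approach the abstract describes, the example below scores a quantized gesture sequence under one discrete HMM per gesture class using the forward algorithm, and labels the sequence with the highest-scoring class. The gesture names, two-symbol alphabet, and all model parameters here are invented for illustration; they are not GT2k's or HTK's (which use Gaussian emissions over real-valued feature vectors).

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    xs = list(xs)
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space."""
    n = len(start_p)
    # alpha[s] = log P(obs[:t+1], state_t = s)
    alpha = [math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        alpha = [
            logsumexp(alpha[sp] + math.log(trans_p[sp][s]) for sp in range(n))
            + math.log(emit_p[s][o])
            for s in range(n)
        ]
    return logsumexp(alpha)

# Two toy gesture models over a 2-symbol alphabet of quantized hand
# positions (0 = left, 1 = right); parameters are illustrative only.
start = [0.5, 0.5]
emit = [[0.9, 0.1], [0.1, 0.9]]        # state 0 emits "left", state 1 "right"
trans_wave = [[0.1, 0.9], [0.9, 0.1]]  # "wave": states tend to alternate
trans_hold = [[0.9, 0.1], [0.1, 0.9]]  # "hold": states tend to persist

def classify(obs):
    """Score the sequence under each class HMM and return the best label."""
    scores = {
        "wave": forward_log_likelihood(obs, start, trans_wave, emit),
        "hold": forward_log_likelihood(obs, start, trans_hold, emit),
    }
    return max(scores, key=scores.get)

print(classify([0, 1, 0, 1, 0, 1]))  # alternating positions -> "wave"
print(classify([1, 1, 1, 1, 1, 1]))  # steady position -> "hold"
```

In a full system along the lines of GT2k, the per-class model parameters would be estimated from labeled example sequences (e.g., via Baum-Welch re-estimation) rather than written by hand, and the same score-and-argmax step would then perform recognition.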