CISIS '12: Proceedings of the 2012 Sixth International Conference on Complex, Intelligent, and Software Intensive Systems
In recent years, gesture recognition has gained increasing attention in the Human-Computer Interaction community. However, gesture segmentation, one of the most challenging tasks in gesture recognition applications, remains an open issue. Gesture segmentation has two main objectives: first, detecting when a gesture begins and ends; second, recognizing whether a gesture is meaningful to the machine or is a non-command gesture (such as gesticulation). This paper proposes a novel test protocol for evaluating techniques that separate command gestures from non-command gestures. Finally, we show how we applied our test protocol to design a touchless, always-available interaction system in which the user communicates directly with the computer through a wearable, "intimate" interface based on electromyographic (EMG) signals.
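The abstract does not specify how the first objective (detecting gesture onset and offset in a continuous signal) is implemented. A common baseline for EMG streams is a double-threshold (hysteresis) detector over the signal envelope; the Python sketch below illustrates that idea only. The function name, threshold values, window length, and the synthetic test signal are all assumptions for illustration, not details taken from the paper.

import numpy as np

def segment_gestures(emg, fs, on_thresh=0.15, off_thresh=0.08,
                     min_duration=0.2):
    """Hypothetical double-threshold segmenter for single-channel EMG.

    A segment opens when the smoothed envelope rises above on_thresh and
    closes when it falls below off_thresh; segments shorter than
    min_duration seconds are discarded as spurious activations.
    Returns a list of (start_sample, end_sample) pairs.
    """
    # Envelope: full-wave rectification followed by a 100 ms
    # moving-average low-pass filter.
    window = max(1, int(0.1 * fs))
    envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="same")

    segments = []
    start = None
    for i, value in enumerate(envelope):
        if start is None and value > on_thresh:
            start = i                        # candidate gesture onset
        elif start is not None and value < off_thresh:
            if (i - start) / fs >= min_duration:
                segments.append((start, i))  # gesture offset
            start = None
    # Close a segment still open at the end of the stream.
    if start is not None and (len(envelope) - start) / fs >= min_duration:
        segments.append((start, len(envelope)))
    return segments

if __name__ == "__main__":
    # Synthetic check: 1 s of baseline noise with a simulated
    # muscle contraction between samples 400 and 700.
    fs = 1000
    rng = np.random.default_rng(0)
    emg = 0.02 * rng.standard_normal(fs)
    emg[400:700] += 0.5 * rng.standard_normal(300)
    print(segment_gestures(emg, fs))  # roughly [(~400, ~700)]

Note that such a detector only addresses onset/offset detection: the paper's second objective, deciding whether a detected segment is a command gesture or mere gesticulation, would require a classifier on top of these candidate segments, which this sketch does not attempt.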