In this paper we present a new bimanual markerless gesture interface for 3D full-body motion tracking sensors, such as the Kinect. Our interface uses a probabilistic algorithm to incrementally predict users' intended one-handed and two-handed gestures while they are still being articulated. It supports scale- and translation-invariant recognition of arbitrarily defined gesture templates in real time. The interface supports two ways of gesturing commands in thin air to displays at a distance. First, users can issue commands directly with one-handed and two-handed gestures. Second, users can use their non-dominant hand to modulate single-hand gestures. Our evaluation shows that the system recognizes one-handed and two-handed gestures with an accuracy of 92.7%–96.2%.
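The abstract leaves the recognizer's details to the body of the paper. Purely as illustration, below is a minimal Python sketch of how an incremental, scale- and translation-invariant template matcher of this general kind can be built: strokes are resampled and normalized, every prefix of each template is treated as a hypothesis for how far the user has progressed, and the best-matching prefix feeds a Gaussian likelihood that is normalized into a posterior over templates. All names and parameters here (`normalize`, `resample`, `incremental_posterior`, the resampling resolution `n`, the noise scale `sigma`) are hypothetical reconstructions, not the paper's actual algorithm.

```python
import numpy as np

def normalize(points):
    """Translate a stroke to its centroid and scale it to unit size,
    giving translation and scale invariance."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)          # translation invariance
    extent = np.ptp(pts, axis=0).max()    # longest bounding-box side
    return pts / extent if extent > 0 else pts

def resample(points, n=32):
    """Resample a stroke to n points evenly spaced along its arc length."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    return np.column_stack(
        [np.interp(targets, cum, pts[:, d]) for d in range(pts.shape[1])]
    )

def incremental_posterior(partial, templates, n=32, sigma=0.05):
    """Probability distribution over gesture templates given a partially
    articulated input stroke (a hypothetical stand-in for the paper's
    probabilistic incremental predictor)."""
    x = normalize(resample(partial, n))
    log_lik = {}
    for name, tmpl in templates.items():
        tmpl = np.asarray(tmpl, dtype=float)
        # Try every template prefix as a hypothesis of the user's progress.
        best = min(
            np.mean(np.linalg.norm(x - normalize(resample(tmpl[:k], n)), axis=1))
            for k in range(2, len(tmpl) + 1)
        )
        log_lik[name] = -best ** 2 / (2.0 * sigma ** 2)
    # Normalize into a posterior under a uniform prior over templates.
    m = max(log_lik.values())
    w = {k: np.exp(v - m) for k, v in log_lik.items()}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

# Hypothetical usage: a half-articulated circle against two templates.
t = np.linspace(0.0, 2.0 * np.pi, 64)
templates = {
    "circle": np.column_stack([np.cos(t), np.sin(t)]),
    "swipe":  np.column_stack([t, np.zeros_like(t)]),
}
half_circle = np.column_stack([np.cos(t[:32]), np.sin(t[:32])])
print(incremental_posterior(half_circle, templates))  # strongly favors "circle"
```

Because normalization is applied to each template prefix as well as to the input, the same comparison works regardless of where in the tracking volume the gesture is performed and at what size, which is one plausible way to obtain the scale and translation invariance the abstract claims.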