- Interacting with paper on the DigitalDesk. Communications of the ACM, special issue on computer augmented environments: back to the real world.
- Toward a vision-based hand gesture interface. VRST '94: Proceedings of the Conference on Virtual Reality Software and Technology.
- Digital Image Processing.
- Inductive learning in hand pose recognition. FG '96: Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition.
- A finger-mounted, direct pointing device for mobile computing. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology.
- Multimodal human discourse: gesture and speech. ACM Transactions on Computer-Human Interaction (TOCHI).
- Vision-Based Gesture Recognition: A Review. GW '99: Proceedings of the International Gesture Workshop on Gesture-Based Communication in Human-Computer Interaction.
- Hand Tracking Using Spatial Gesture Modeling and Visual Feedback for a Virtual DJ System. ICMI '02: Proceedings of the 4th IEEE International Conference on Multimodal Interfaces.
- Visual panel: virtual mouse, keyboard and 3D controller with an ordinary piece of paper. Proceedings of the 2001 Workshop on Perceptive User Interfaces.
- Hand Motion Gesture Frequency Properties and Multimodal Discourse Analysis. International Journal of Computer Vision.
- Dynamic hand gesture recognition using the skeleton of the hand. EURASIP Journal on Applied Signal Processing.
- Vision-based hand pose estimation: A review. Computer Vision and Image Understanding.
- The catchment feature model: a device for multimodal fusion and a bridge between signal and sense. EURASIP Journal on Applied Signal Processing.
- Binaural mixing using gestural control interaction. Proceedings of the 5th Audio Mostly Conference: A Conference on Interaction with Sound.
- Humans and smart environments: a novel multimodal interaction approach. ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces.
- Mixture models with skin and shadow probabilities for fingertip input applications. Journal of Visual Communication and Image Representation.
This article presents work on unencumbered hand-gesture interfaces, encompassing both three-dimensional interaction and two-dimensional pointing. For three-dimensional gestures, the computational approach is motivated by a brief overview of human hand-gesture interaction. The model requires determining the gestural stroke, recognizing the hand poses at the extrema of the stroke, and determining the dynamics of hand motion during the stroke. Work is presented on the inductive learning of hand-gesture poses using extended variable-valued logic and a rule-based induction algorithm, with which the author and his colleagues attained a 94% recognition rate. The author also discusses their work on computing the image flow fields that represent the moving hand.

Readers may contact Quek at the Electrical Engineering and Computer Science Dept., University of Illinois at Chicago, Chicago, Ill. 60607; e-mail: quek@eecs.uic.edu.
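The gesture model summarized in the abstract has two separable steps: locating the gestural stroke within a motion trace, and classifying the hand pose at the stroke's extrema. The sketch below illustrates that decomposition only; it is not the author's system, and the speed threshold, the toy rule table, and all function names are hypothetical.

```python
def segment_stroke(speeds, threshold=1.0):
    """Return (start, end) frame indices of the longest run of frames
    whose speed exceeds the threshold; slower frames are treated as the
    preparation/retraction phases surrounding the stroke."""
    best = (0, 0)
    run_start = None
    for i, s in enumerate(speeds):
        if s > threshold:
            if run_start is None:
                run_start = i
            if i + 1 - run_start > best[1] - best[0]:
                best = (run_start, i + 1)
        else:
            run_start = None
    return best

def classify_pose(features):
    """Toy stand-in for a learned rule base: each rule is a conjunction
    of attribute tests, mirroring the flavor of rule-based induction
    over variable-valued attribute descriptions."""
    if features["extended_fingers"] == 1 and features["thumb_out"]:
        return "point"
    if features["extended_fingers"] == 5:
        return "open_hand"
    return "unknown"

# Example: per-frame hand speeds, plus pose features observed at the
# extremum where the stroke begins (both made up for illustration).
speeds = [0.1, 0.2, 2.0, 2.5, 2.2, 1.8, 0.2, 0.1]
start, end = segment_stroke(speeds)
pose_at_start = classify_pose({"extended_fingers": 1, "thumb_out": True})
print(start, end, pose_at_start)  # frames 2..6 form the stroke
```

In a real system the rule table would be induced from labeled pose examples and the stroke boundaries refined from the image flow field, but the control flow above captures the stroke-then-pose structure the abstract describes.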