We propose a state-based approach to gesture learning and recognition. Using spatial clustering and temporal alignment, each gesture is defined as an ordered sequence of states in spatial-temporal space. The 2D image positions of the centers of the user's head and both hands are used as features; these are located by a color-based tracking method. From training data for a given gesture, we first learn the spatial information and then group the data into segments that are automatically aligned temporally. The temporal information is further integrated to build a Finite State Machine (FSM) recognizer; each gesture has a corresponding FSM. The computational efficiency of the FSM recognizers allows us to achieve real-time, on-line performance. We apply this technique to build an experimental system that plays a game of "Simon Says" with the user.
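The FSM idea above can be sketched in a few lines: each gesture is an ordered list of states, and the machine advances whenever the tracked feature point enters the current state's spatial region. This is a minimal illustration, not the paper's implementation; representing a state as a centroid plus an acceptance radius, the class name `GestureFSM`, and the example gesture are all assumptions made for the sketch.

```python
import math

class GestureFSM:
    """One gesture = an ordered sequence of spatial states.

    Each state is a (center, radius) pair in feature space (hypothetical
    state shape chosen for illustration); the machine advances when the
    observed feature point falls within the current state's radius.
    """

    def __init__(self, states):
        self.states = states  # list of ((x, y), radius) tuples
        self.current = 0      # index of the state we are waiting to match

    def reset(self):
        self.current = 0

    def observe(self, point):
        """Feed one tracked feature point; return True once the final
        state has been reached (gesture recognized)."""
        if self.current < len(self.states):
            center, radius = self.states[self.current]
            if math.dist(point, center) <= radius:
                self.current += 1  # advance to the next state
        return self.current >= len(self.states)

# Hypothetical "wave" gesture: the hand passes through three regions.
wave = GestureFSM([((0.0, 0.0), 0.5), ((1.0, 1.0), 0.5), ((2.0, 0.0), 0.5)])

# A sample hand trajectory (2D image positions from the tracker).
trajectory = [(0.1, 0.1), (0.5, 0.6), (1.0, 0.9), (1.9, 0.1)]
done = any(wave.observe(p) for p in trajectory)
```

Because each incoming frame costs only one distance check per active gesture FSM, many recognizers can run in parallel in real time, which is what makes the approach suitable for an interactive game like "Simon Says".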