In this paper we present a system, called Gesture User Interface (GestureUI), that recognizes static gestures of the Brazilian Sign Language (Libras) in real time from a continuous video stream, using the depth information captured by a Kinect sensor. We focus on handling small sets of gestures, (A, E, I, O, U) and (B, C, F, L, V), processing them in two steps: segmentation and classification. For segmentation we propose a virtual wall together with Libras-specific heuristics to improve hand tracking. For classification we use a Multi-Layer Perceptron trained by the system, and we present an arm-cutting algorithm that improved the recognition rates from 67.4% and 75.4%, respectively, to 100% for both gesture sets. Finally, we evaluated the processing performance of the overall system and show that it processes frames at 62.5 Hz on an Intel i7 processor, more than twice the Kinect frame-capture rate.
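The virtual-wall segmentation step can be illustrated with a minimal sketch: pixels whose depth reading places them in front of an imaginary plane at a fixed distance from the sensor are kept as hand candidates, and everything behind the wall (background, body, other people) is discarded. The function name, the wall distance, and the toy depth frame below are illustrative assumptions, not the paper's actual implementation.

```python
def segment_hand(depth_frame, wall_mm=800):
    # depth_frame: rows of depth readings in millimetres (0 = no reading,
    # which is how the Kinect reports pixels it cannot measure)
    # wall_mm: hypothetical distance of the virtual wall from the sensor;
    # only pixels strictly in front of the wall survive segmentation
    return [[0 < d < wall_mm for d in row] for row in depth_frame]

# Toy 3x3 "depth frame": only the 650 mm reading lies in front of the wall.
frame = [[1200,    0,  900],
         [1500,  650, 2000],
         [   0, 1100,  950]]
mask = segment_hand(frame)
```

In a real pipeline the resulting binary mask would then feed the hand-tracking heuristics and, after the arm-cutting step, the Multi-Layer Perceptron classifier.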