This paper proposes a novel multi-modal gesture recognition framework and applies it to continuous sign language recognition. A Hidden Markov Model is used to build the audio feature classifier, while a skeleton feature classifier based on Dynamic Time Warping is trained to provide complementary information. The confidence scores produced by the two classifiers are first normalized and then combined into a weighted sum for the final recognition. Experimental results show that our multi-modal framework achieves precision and recall of 0.8829 and 0.8890, respectively, over 20 gesture classes, demonstrating that the fusion correctly rejects false detections made by either single classifier. Our approach scored 0.12756 in mean Levenshtein distance and ranked 1st in the 2013 Multi-modal Gesture Recognition Challenge.
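The fusion step described in the abstract is simple to sketch: each stream's per-class confidence scores are normalized to a common range, combined as a weighted sum, and the highest fused score gives the predicted class. Below is a minimal Python sketch of that late-fusion step, assuming min-max normalization and a hand-picked audio weight; the `fuse_scores` helper, the normalization scheme, and the 0.5 weight are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def min_max_normalize(scores):
    """Rescale one classifier's confidence scores to [0, 1].

    The paper normalizes scores before fusion; min-max scaling is one
    plausible choice (an assumption, not confirmed by the abstract).
    """
    lo, hi = scores.min(), scores.max()
    if hi == lo:                       # degenerate case: all scores equal
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

def fuse_scores(audio_scores, skeleton_scores, w_audio=0.5):
    """Weighted-sum late fusion of two per-class score vectors.

    audio_scores    : HMM confidences, one value per gesture class
    skeleton_scores : DTW confidences, one value per gesture class
    w_audio         : fusion weight for the audio stream (hypothetical
                      value; in practice it would be tuned on held-out data)
    """
    a = min_max_normalize(np.asarray(audio_scores, dtype=float))
    s = min_max_normalize(np.asarray(skeleton_scores, dtype=float))
    fused = w_audio * a + (1.0 - w_audio) * s
    return int(np.argmax(fused)), fused  # predicted class index and fused scores

# Toy usage: 20 gesture classes with random confidences, for illustration only.
rng = np.random.default_rng(0)
pred, fused = fuse_scores(rng.random(20), rng.random(20))
print(pred, fused[pred])
```

In a setup like this, the fusion weight acts as the mechanism that lets one stream veto the other's false detections: a gesture accepted by only one classifier receives a low fused score unless the other stream also assigns it some confidence.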