Vision-based interfaces pose a tempting alternative to physical interfaces. Intuitive and multi-purpose, such interfaces could allow people to interact with computers naturally and effortlessly. Existing vision-based interfaces, however, are difficult to apply in practice because they impose many environmental constraints. In this paper, we introduce a vision-based game interface that is robust to varying environments. The interface consists of three main modules: body-part localization, pose classification, and gesture recognition. First, the body-part localization module automatically determines the locations of body parts such as the face and hands; for this, we extract body parts using the SCI color model, human physical characteristics, and heuristic information. Next, the pose classification module classifies the positions of the detected body parts in each frame into a pose according to the Euclidean distance between the input positions and predefined poses. Finally, the gesture recognition module extracts from successive frames the sequence of poses corresponding to a gesture and translates that sequence into game commands using an HMM. To assess the effectiveness of the proposed interface, we tested it with the popular computer game Quake II, and the results confirm that the vision-based interface enables more natural and friendly communication while controlling the game.
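The pose-classification and gesture-recognition steps described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the pose names, body-part coordinates, and the toy two-state HMM parameters below are all hypothetical stand-ins, since the abstract does not specify the actual pose vocabulary, features, or trained model parameters. A frame's detected body-part positions are assigned to the nearest predefined pose by Euclidean distance, and the resulting pose sequence is scored against one discrete-emission HMM per gesture (here via the forward algorithm), taking the best-scoring gesture as the recognized command.

```python
import math

# Hypothetical predefined poses: each pose maps to the normalized
# (face, left-hand, right-hand) 2-D positions that define it.
POSES = {
    "neutral":    [(0.5, 0.2), (0.3, 0.6), (0.7, 0.6)],
    "hands_up":   [(0.5, 0.2), (0.3, 0.3), (0.7, 0.3)],
    "left_raise": [(0.5, 0.2), (0.3, 0.3), (0.7, 0.6)],
}

def classify_pose(parts):
    """Assign the detected body-part positions to the predefined pose
    with the smallest summed Euclidean distance."""
    def total_dist(template):
        return sum(math.dist(p, q) for p, q in zip(template, parts))
    return min(POSES, key=lambda name: total_dist(POSES[name]))

# Toy two-state left-to-right HMMs, one per gesture command.
# pi: initial state log of... plain probabilities; A: transitions;
# B: per-state emission probabilities over pose symbols. Values are invented.
MODELS = {
    "jump": {
        "pi": [1.0, 0.0],
        "A":  [[0.5, 0.5], [0.0, 1.0]],
        "B":  [{"neutral": 0.8, "hands_up": 0.1, "left_raise": 0.1},
               {"neutral": 0.1, "hands_up": 0.8, "left_raise": 0.1}],
    },
    "strafe_left": {
        "pi": [1.0, 0.0],
        "A":  [[0.5, 0.5], [0.0, 1.0]],
        "B":  [{"neutral": 0.8, "hands_up": 0.1, "left_raise": 0.1},
               {"neutral": 0.1, "hands_up": 0.1, "left_raise": 0.8}],
    },
}

def likelihood(model, pose_seq):
    """P(pose sequence | model) by the forward algorithm."""
    pi, A, B = model["pi"], model["A"], model["B"]
    n = len(pi)
    alpha = [pi[i] * B[i][pose_seq[0]] for i in range(n)]
    for obs in pose_seq[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs]
                 for i in range(n)]
    return sum(alpha)

def recognize(models, pose_seq):
    """Return the gesture whose HMM best explains the pose sequence."""
    return max(models, key=lambda g: likelihood(models[g], pose_seq))

# A frame with both hands raised classifies as "hands_up", and a
# neutral-then-hands-up pose sequence is recognized as "jump".
pose = classify_pose([(0.5, 0.21), (0.31, 0.31), (0.69, 0.3)])
gesture = recognize(MODELS, ["neutral", "neutral", "hands_up", "hands_up"])
```

In a real system the recognized gesture label would then be mapped to a game command (e.g. a key event sent to Quake II), and the HMM parameters would be trained from labeled pose sequences rather than hand-set.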