This paper presents a multimodal system that understands and corrects the movements of Tai-Chi students in real time through the integration of audio, visual, and tactile technologies. The platform acts as a virtual teacher that transfers knowledge of five Tai-Chi movements, using feedback stimuli to compensate for the errors a user commits while performing each gesture. The fundamental components of this multimodal interface are the gesture recognition system (based on k-means clustering, Probabilistic Neural Networks (PNN), and Finite State Machines (FSM)) and a real-time motion descriptor that computes and qualifies the movements actually performed by the student with respect to those performed by the master, producing several feedback signals and compensating the movement in real time by varying the audio, visual, and tactile parameters of different devices. Experiments with this multimodal platform confirmed that the quality of the movements performed by the students improves significantly.
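The recognition pipeline named in the abstract (k-means to quantize pose features, a PNN to classify them, and an FSM to track a gesture's sequence of key poses) can be sketched roughly as follows. This is a minimal illustrative sketch under assumed interfaces, not the authors' implementation; all class names, parameters, and the Gaussian kernel width are assumptions:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Quantize pose feature vectors into k cluster centers (codebook)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute means.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

class PNN:
    """Probabilistic Neural Network: Parzen-window class densities
    with a Gaussian kernel; predicts the class of highest density."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(self.y)
        return self
    def predict(self, x):
        d2 = ((self.X - np.asarray(x, float)) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        scores = [w[self.y == c].sum() for c in self.classes]
        return self.classes[int(np.argmax(scores))]

class GestureFSM:
    """Accepts a gesture once its key poses are observed in order."""
    def __init__(self, sequence):
        self.sequence, self.state = list(sequence), 0
    def step(self, pose_label):
        if self.state < len(self.sequence) and pose_label == self.sequence[self.state]:
            self.state += 1
        return self.state == len(self.sequence)  # True when gesture completed
```

In use, each incoming frame's pose features would be classified by the PNN into a key-pose label, and the FSM would advance only when the next expected key pose of the taught movement is observed, signaling completion (or, by its absence, the need for corrective feedback).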