This paper presents a multimodal information presentation method for a basic dance training system. The system targets beginners and enables them to learn the basics of dance easily. One of the most effective ways to learn a dance is to watch a video of expert dancers performing it. However, some information is conveyed poorly through video. One kind is translational motion, especially motion in the depth direction: we cannot tell exactly how far the dancer moves forward or backward. Another is timing: although video shows how to move our arms or legs, it is difficult to know when to start moving them. We address the first issue by mounting an image display on a mobile robot, so that learners can grasp the amount of translation simply by following the robot. For the second issue, we introduce active devices composed of vibro-motors that deliver action-starting cues through vibration. Experimental results show the effectiveness of our multimodal information presentation method.
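The timing-cue idea above can be illustrated with a minimal sketch: given a choreography as a list of timestamped moves, fire each vibration cue a short lead time before the move's start so the learner knows when to begin. This is not the paper's implementation; the `Move` structure, the `lead` parameter, and the stubbed motor trigger are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    start: float  # move onset, in seconds into the music

def cue_schedule(moves, lead=0.5):
    """Return (cue_time, move_name) pairs.

    Each vibro-motor cue fires `lead` seconds before the move's
    onset, clamped at 0.0 so the first cue is never negative.
    The lead time of 0.5 s is an arbitrary illustrative value.
    """
    return [(max(0.0, m.start - lead), m.name) for m in moves]

def trigger_vibration(move_name):
    # Placeholder for driving the actual vibro-motor hardware.
    print(f"vibrate: {move_name}")

choreography = [Move("step-forward", 1.0), Move("arm-raise", 3.2)]
for cue_time, name in cue_schedule(choreography):
    # In a real system these would be dispatched by a timer
    # synchronized to music playback.
    print(f"{cue_time:.1f}s ->", end=" ")
    trigger_vibration(name)
```

A real system would synchronize these cue times against audio playback and map each move to a specific motor (e.g. left arm vs. right leg), but the scheduling step reduces to this offset computation.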