Multimodal presentation method for a dance training system

  • Authors:
  • Akio Nakamura;Sou Tabata;Tomoya Ueda;Shinichiro Kiyofuji;Yoshinori Kuno

  • Affiliations:
Saitama University, Saitama, JAPAN (all authors)

  • Venue:
  • CHI '05 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2005

Abstract

This paper presents a multimodal information presentation method for a basic dance training system. The system targets beginners and enables them to learn the basics of dance easily. One of the most effective ways to learn a dance is to watch a video of a dance master's performance. However, some information cannot be conveyed well through video. One is translational motion, especially in the depth direction: we cannot tell exactly how far the dancer moves forward or backward. Another is timing: although video shows how to move the arms or legs, it is difficult to know when to start moving them. We address the first issue by mounting the image display on a mobile robot, so the learner can grasp the amount of translation simply by following the robot. For the second issue we introduce active devices composed of vibro-motors, which deliver action-starting cues through vibration. Experimental results show the effectiveness of our multimodal information presentation method.
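The timing-cue idea described in the abstract can be sketched in code: each vibro-motor cue is tied to a video timestamp at which a move should begin, and the vibration is fired slightly before that moment so the learner has time to react. The device names, the lead time, and all function names below are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cue:
    start_time: float  # video time (s) at which the dancer should start the move
    device: str        # which limb-mounted vibro-motor to pulse (assumed naming)

def due_cues(cues, prev_time, now):
    """Return cues whose trigger moment falls in the interval (prev_time, now].

    The vibration fires LEAD seconds before the move itself, giving the
    learner an assumed reaction allowance.
    """
    LEAD = 0.3  # seconds of advance warning (assumed value)
    return [c for c in cues if prev_time < c.start_time - LEAD <= now]

# Hypothetical choreography: three action-start cues on a 3-second timeline.
cues = [Cue(1.0, "left_arm"), Cue(2.5, "right_leg"), Cue(2.6, "left_leg")]

# Poll the video clock in 0.5 s steps; each cue fires exactly once.
fired = []
t = 0.0
while t < 3.0:
    fired += [c.device for c in due_cues(cues, t, t + 0.5)]
    t += 0.5
```

In a real system the polling loop would be driven by the video player's clock and `due_cues` would command the vibro-motor hardware; the half-open interval test guarantees each cue is delivered exactly once even with coarse polling.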