GestureMan: a mobile robot that embodies a remote instructor's actions

  • Authors:
  • Hideaki Kuzuoka, Shinya Oyama, Keiichi Yamazaki, Kenji Suzuki, Mamoru Mitsuishi

  • Affiliations:
  • Inst. of Engineering Mechanics and Systems, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba, Ibaraki, Japan
  • Department of Liberal Arts, Saitama University, 255 Shimo-Ookubo, Urawa, Saitama 338, Japan
  • Intelligent Communications Division, Communications Research Laboratory, 4-2-1 Nukui-Kitamachi, Koganei, Tokyo, Japan
  • Department of Engineering Synthesis, Faculty of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

  • Venue:
  • CSCW '00 Proceedings of the 2000 ACM conference on Computer supported cooperative work
  • Year:
  • 2000

Abstract

When designing systems that support remote instruction on physical tasks, one must consider four requirements: 1) participants should be able to use non-verbal expressions; 2) they must be able to take appropriate body arrangements to see and show gestures; 3) the instructor should be able to monitor the operators and objects; and 4) they must be able to organize the arrangement of bodies and tools and their gestural expressions sequentially and interactively. GestureMan was developed to satisfy these four requirements by using a mobile robot that embodies a remote instructor's actions. The mobile robot carries a camera and a remote-controlled laser pointer. Based on experiments with the system, we discuss the advantages and disadvantages of the current implementation and describe some implications for improving the system.