Facial emotion and gesture reproduction method for substitute robot of remote person

  • Authors:
  • Kenzo Kurihara;Daisuke Sugiyama;Shigeru Matsumoto;Nobuyuki Nishiuchi;Kazuaki Masuda

  • Affiliations:
  • Department of Information Systems Creation, Kanagawa University, 3-27-1, Rokkakubashi, Kanagawa-ku, Yokohama 221-8686, Japan;Department of Information Systems Creation, Kanagawa University, 3-27-1, Rokkakubashi, Kanagawa-ku, Yokohama 221-8686, Japan;Department of Information Systems Creation, Kanagawa University, 3-27-1, Rokkakubashi, Kanagawa-ku, Yokohama 221-8686, Japan;Faculty of System Design, Tokyo Metropolitan University, 6-6, Asahigaoka, Hino, Tokyo 191-0065, Japan;Department of Information Systems Creation, Kanagawa University, 3-27-1, Rokkakubashi, Kanagawa-ku, Yokohama 221-8686, Japan

  • Venue:
  • Computers and Industrial Engineering
  • Year:
  • 2009

Abstract

CEOs of large companies travel frequently to convey their philosophies and policies to employees working at worldwide branches. Video technology makes it possible to deliver their lectures anywhere in the world at any time, but 2-dimensional video systems lack a sense of presence. If natural, realistic lectures could be given through humanoid robots, CEOs would not need to meet employees in person, saving the time and money spent on travel. We propose a substitute robot for a remote person. The substitute robot is a humanoid robot that reproduces the lecturer's facial expressions and body movements, effectively sending the lecturer anywhere in the world instantaneously with the feeling of being at a live performance. The development involves two major tasks: facial expression recognition/reproduction and body language reproduction. For the former task, we proposed a facial expression recognition method based on a neural network model that recognizes five emotions (surprise, anger, sadness, happiness, and no emotion) in real time. We also developed a facial robot that reproduces the recognized emotion on the robot's face, and showed through experiments that the robot could reproduce a speaker's emotions with its face. For the latter task, we proposed a degradation control method that reproduces the natural movement of the lecturer even when a rotary joint of the robot fails. As the fundamental stage of our research on this sub-system, we proposed a control method for a front-view movement model, i.e., a 2-dimensional model.
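The abstract does not give implementation details of the neural-network emotion recognizer, so the following is only a rough illustrative sketch: a tiny feed-forward network mapping a vector of facial feature measurements to the five emotion classes named above. All dimensions, weights, and feature values here are hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: a small feed-forward classifier over facial
# feature measurements (e.g. eyebrow, eye, and mouth geometry). The feature
# count, layer sizes, and weights are invented for demonstration.

EMOTIONS = ["surprise", "anger", "sadness", "happiness", "no emotion"]

rng = np.random.default_rng(0)

# Toy network: 8 facial features -> 12 hidden units -> 5 emotion scores.
W1 = rng.normal(scale=0.5, size=(8, 12))
b1 = np.zeros(12)
W2 = rng.normal(scale=0.5, size=(12, 5))
b2 = np.zeros(5)

def softmax(z):
    # Numerically stable softmax over class scores.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(features):
    """Forward pass: return (predicted label, class probabilities)."""
    h = np.tanh(features @ W1 + b1)
    p = softmax(h @ W2 + b2)
    return EMOTIONS[int(np.argmax(p))], p

# Example: a (dummy) feature vector extracted from one video frame.
label, probs = classify(np.array([0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.1, 0.5]))
print(label, probs.round(3))
```

In a real-time pipeline of this kind, `classify` would run on features extracted from each incoming video frame, and the predicted label would drive the facial robot's actuators.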
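The degradation control idea, compensating for a failed rotary joint so the robot's motion still approximates the lecturer's, can be illustrated on a 2-dimensional (front-view) model. The sketch below is an assumption-laden toy, not the paper's method: a 3-link planar arm where one joint locks at a failure angle, and the remaining joints are grid-searched to bring the end effector as close as possible to the originally intended position.

```python
import numpy as np

# Toy 2-D degradation-control sketch (link lengths are illustrative).
L = [1.0, 0.8, 0.6]

def fk(angles):
    """End-effector (x, y) of a planar serial arm, angles given per joint."""
    th = np.cumsum(angles)  # absolute orientation of each link
    x = sum(l * np.cos(t) for l, t in zip(L, th))
    y = sum(l * np.sin(t) for l, t in zip(L, th))
    return np.array([x, y])

def compensate(target, failed_idx, failed_angle, steps=120):
    """Search the healthy joints for the pose that best reaches `target`
    while the failed joint stays locked at `failed_angle`."""
    grid = np.linspace(-np.pi, np.pi, steps)
    free = [i for i in range(3) if i != failed_idx]
    best, best_err = None, np.inf
    for a in grid:
        for b in grid:
            angles = [0.0, 0.0, 0.0]
            angles[failed_idx] = failed_angle
            angles[free[0]], angles[free[1]] = a, b
            err = np.linalg.norm(fk(angles) - target)
            if err < best_err:
                best, best_err = angles, err
    return best, best_err

# Intended pose of the healthy arm, then joint 1 fails locked at 0.0 rad.
target = fk([0.4, 0.3, -0.2])
angles, err = compensate(target, failed_idx=1, failed_angle=0.0)
print(angles, err)
```

A grid search is used here only for transparency; a practical controller would solve the constrained inverse kinematics analytically or with gradient-based optimization.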