mimicat: face input interface supporting animatronics costume performer's facial expression

  • Authors:
  • Rika Shoji; Toshiki Yoshiike; Yuya Kikukawa; Tadahiro Nishikawa; Taigetsu Saori; Suketomo Ayaka; Tetsuaki Baba; Kumiko Kushiyama

  • Affiliations:
  • Tokyo Metropolitan University, Hino, Tokyo, Japan (all authors)

  • Venue:
  • ACM SIGGRAPH 2012 Posters
  • Year:
  • 2012

Abstract

Today, character costumes can be seen in many places, such as amusement facilities and sports stadiums, where performers entertain audiences with comical and funny body actions. In general, however, the performers cannot control their costume's facial expression. We developed "mimicat," a system that synchronizes the performer's facial movements with those of the costume, allowing a character costume performer to give a more comical and expressive performance. Our original motivation was to combine animatronics with facial expression recognition.
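
The poster abstract does not give implementation details, but the basic idea of driving an animatronic costume face from the performer's expression can be illustrated with a minimal sketch. The example below is not the authors' system: it assumes a webcam pointed at the performer inside the costume, OpenCV's bundled Haar cascades for face and smile detection, and a servo controller on a hypothetical serial port (/dev/ttyUSB0) that accepts a made-up "M<angle>" command.

```python
# Illustrative sketch only, not the mimicat implementation.
# Pipeline: capture the performer's face -> classify the expression ->
# send a mouth-servo angle to the costume's animatronic face.
import cv2
import serial

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

servo = serial.Serial("/dev/ttyUSB0", 9600)  # hypothetical servo controller
cap = cv2.VideoCapture(0)                    # camera facing the performer

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        smiling = len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0
        # Map the detected expression to a mouth-servo angle (0-90 degrees).
        angle = 90 if smiling else 0
        servo.write(f"M{angle}\n".encode())  # hypothetical command format
    cv2.imshow("performer", frame)
    if cv2.waitKey(1) == 27:                 # ESC to quit
        break

cap.release()
servo.close()
cv2.destroyAllWindows()
```

A real costume would presumably use a richer expression classifier and several actuators (mouth, eyebrows, eyelids), but the control loop would keep the same shape: capture, classify, map to actuator commands.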