Estimation of user conversational states based on combination of user actions and feature normalization

  • Authors:
  • Hirotake Yamazoe; Yuichi Koyama; Tomoko Yonezawa; Shinji Abe; Kenji Mase

  • Affiliations:
  • ATR IRC Lab., Hikaridai, Seikacho, Kyoto, Japan; ATR/Nagoya University, Hikaridai, Seikacho, Kyoto, Japan; ATR IRC Lab., Hikaridai, Seikacho, Kyoto, Japan; ATR IRC Lab., Hikaridai, Seikacho, Kyoto, Japan; Nagoya University, Furocho, Chikusaku, Nagoya, Japan

  • Venue:
  • CASEMANS '11 Proceedings of the 5th ACM International Workshop on Context-Awareness for Self-Managing Systems
  • Year:
  • 2011

Abstract

In this paper, we propose a method to estimate user conversational states, such as concentrating or not concentrating. We previously proposed a robot-assisted videophone system to sustain conversations between elderly people. In such videophone systems, the user's conversational situation must be estimated so that the robot can behave appropriately. The proposed method employs i) elemental actions and combinations of elemental actions as features for recognition, and ii) normalization of the feature vectors based on the frequencies of the actions. Experimental results show the effectiveness of our method.
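The feature construction described in the abstract can be illustrated with a minimal sketch. This is a hypothetical interpretation, since the abstract does not specify the exact feature set or normalization formula: counts of elemental actions and their pairwise combinations form the feature vector, which is then normalized by the total action frequency. The action vocabulary and the `min`-based pair count are illustrative assumptions.

```python
from itertools import combinations

def build_feature_vector(actions, action_vocab):
    """Hypothetical sketch of the paper's feature construction:
    i) counts of elemental actions and their pairwise combinations,
    ii) normalization by the overall action frequency."""
    # Count each elemental action in the observed sequence.
    counts = {a: actions.count(a) for a in action_vocab}
    # Approximate combination features as co-occurrence counts of
    # distinct action pairs (illustrative assumption, not from the paper).
    pair_counts = {(a, b): min(counts[a], counts[b])
                   for a, b in combinations(action_vocab, 2)}
    # Normalize all features by the total number of observed actions.
    total = sum(counts.values()) or 1
    features = [counts[a] / total for a in action_vocab]
    features += [pair_counts[(a, b)] / total
                 for a, b in combinations(action_vocab, 2)]
    return features

# Example: a short observation window with three actions.
fv = build_feature_vector(["gaze", "gaze", "nod"], ["gaze", "nod", "speak"])
```

In this sketch the normalization step makes feature vectors comparable across users who act with different overall frequencies, which is the motivation the abstract gives for step ii).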