In this paper, we propose a method to estimate user conversational states, such as concentrating or not concentrating. We previously proposed a robot-assisted videophone system to sustain conversations between elderly people. In such videophone systems, the user's conversational state must be estimated so that the robot can behave appropriately. The proposed method employs i) elemental actions and combinations of elemental actions as recognition features, and ii) normalization of the feature vectors based on the frequencies of those actions. Experimental results show the effectiveness of the method.
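The abstract does not give the exact feature set or normalization formula, but the general idea — counting elemental actions and their co-occurrences, then scaling by overall action frequency — can be sketched as follows. The action names and the choice of dividing by the total action count are illustrative assumptions, not the paper's actual definitions.

```python
from collections import Counter
from itertools import combinations

def extract_features(actions):
    """Count single elemental actions and co-occurring action pairs
    within one observation window (hypothetical feature scheme)."""
    singles = Counter(actions)
    pairs = Counter(frozenset(p) for p in combinations(set(actions), 2))
    return singles, pairs

def normalize(features, total_actions):
    """Scale each feature count by the window's total action count,
    so users who simply move more are not over-weighted."""
    if total_actions == 0:
        return {k: 0.0 for k in features}
    return {k: v / total_actions for k, v in features.items()}

# Example: elemental actions observed in one time window
# (action labels here are made up for illustration)
actions = ["nod", "gaze_at_screen", "nod", "smile"]
singles, pairs = extract_features(actions)
vec = normalize(singles, len(actions))
```

A classifier for the concentrating / not-concentrating decision would then be trained on such normalized vectors; the normalization step is what compensates for per-user differences in how often actions occur at all.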