Despite a large body of existing literature on automatic affect recognition, there is a lack of studies investigating task and social context for the purpose of automatically predicting affect. This work aims to take the current state of the art a step forward and explore the role of task and social context, and their interdependencies, in the automatic prediction of user engagement in an HRI scenario involving an iCat robot playing chess with young children. We performed an experimental evaluation by training several SVM-based models with different features extracted from a set of context logs collected in an HRI field experiment. The features include information about the game and the social context at the interaction level (overall features) and at the game-turn level (turn-based features). While the overall features capture game and social context independently at the interaction level, turn-based features attempt to encode the interdependencies of game and social context at each turn of the game. Results showed that game and social context-based features can be successfully used to predict engagement with the robot in the showcased scenario. Specifically, overall features proved more successful than turn-based features, and game context-based features more effective than social context-based features. Finally, the results demonstrated that integrating game and social context-based features with features encoding their interdependencies leads to higher recognition performance.
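The distinction between overall (interaction-level) and turn-based features can be sketched as follows. This is a minimal illustration only: the field names (`move_quality`, `user_smiled`) and the aggregates are assumptions for the sake of the example, not the authors' actual feature set.

```python
# Hypothetical sketch of the two feature granularities described above.
# "Overall" features summarize game and social context independently over
# the whole interaction; "turn-based" features couple them at each turn.

def overall_features(turns):
    """Interaction-level aggregates over all game turns."""
    n = len(turns)
    return {
        # Game context summarized independently of social context.
        "game_avg_move_quality": sum(t["move_quality"] for t in turns) / n,
        # Social context summarized independently of game context.
        "social_smile_rate": sum(t["user_smiled"] for t in turns) / n,
    }

def turn_based_features(turns):
    """Per-turn features, including a term that encodes the
    interdependency of game and social context at that turn
    (e.g. smiling right after a good move)."""
    return [
        {
            "move_quality": t["move_quality"],
            "user_smiled": t["user_smiled"],
            # Interdependency term: social response conditioned on game state.
            "smile_x_quality": t["user_smiled"] * t["move_quality"],
        }
        for t in turns
    ]

# Toy log of two game turns (values are illustrative).
turns = [
    {"move_quality": 0.8, "user_smiled": 1},
    {"move_quality": 0.2, "user_smiled": 0},
]
print(overall_features(turns))
print(turn_based_features(turns))
```

In the paper's setup such feature vectors would then be fed to SVM-based classifiers; the point of the sketch is only the difference in granularity between the two encodings.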