The Inter-ACT (INTEracting with Robots - Affect Context Task) corpus is an affectively and contextually rich multimodal video corpus containing affective expressions of children playing chess with an iCat robot. It contains videos that capture the interaction from different perspectives, together with synchronised contextual information about the game and the behaviour displayed by the robot. The Inter-ACT corpus is primarily intended as a comprehensive repository of naturalistic, contextualised, task-dependent data for training and evaluating an affect recognition system in an educational game scenario. The richness of the contextual data, which captures the whole human-robot interaction cycle, together with the fact that the corpus was collected in the same interaction scenario as the target application, makes the Inter-ACT corpus unique of its kind.