Autonomous Agents and Multi-Agent Systems
Model-Based Object Pose in 25 Lines of Code
ECCV '92 Proceedings of the Second European Conference on Computer Vision
A New Approach to Object-Oriented Middleware
IEEE Internet Computing
MULTIPLATFORM testbed: an integration platform for multimodal dialog systems
SEALTS '03 Proceedings of the HLT-NAACL 2003 workshop on Software engineering and architecture of language technology systems - Volume 8
SmartKom: Foundations of Multimodal Dialogue Systems (Cognitive Technologies)
Multimodal generation in the COMIC dialogue system
ACLdemo '05 Proceedings of the ACL 2005 on Interactive poster and demonstration sessions
Automatic prediction of frustration
International Journal of Human-Computer Studies
Playing with virtual peers: bootstrapping contingent discourse in children with autism
ICLS'08 Proceedings of the 8th international conference on International conference for the learning sciences - Volume 2
ISVC '09 Proceedings of the 5th International Symposium on Advances in Visual Computing: Part II
The Knowledge Engineering Review
Development of a software-based social tutor for children with autism spectrum disorders
OZCHI '09 Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7
The SuperCollider Book
Feeling and reasoning: a computational model for emotional characters
EPIA'05 Proceedings of the 12th Portuguese conference on Progress in Artificial Intelligence
Social communication between virtual characters and children with autism
AIED'11 Proceedings of the 15th international conference on Artificial intelligence in education
International Journal of Technology Enhanced Learning
Proceedings of the 12th International Conference on Interaction Design and Children
The development of social communication skills in children relies on multimodal aspects of communication such as gaze, facial expression, and gesture. We introduce a multimodal learning environment for social skills that uses computer vision to estimate the children's gaze direction, processes gestures from a large multi-touch screen, estimates the users' affective state in real time, and generates interactive narratives with embodied virtual characters. We also describe how the architecture underlying this system is currently being extended into a general framework for the development of interactive multimodal systems.
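The abstract describes several independent input modules (gaze estimation, touch gestures, affect recognition) feeding a narrative engine. One common way to structure such a system is to have each module post timestamped events into a shared hub that delivers them in time order to the dialogue manager. The sketch below illustrates this pattern in Python; the class and module names (`FusionHub`, `ModalityEvent`, the `"gaze"`/`"touch"`/`"affect"` labels) are illustrative assumptions, not the authors' actual framework:

```python
from dataclasses import dataclass, field
import heapq
import time

@dataclass(order=True)
class ModalityEvent:
    """One timestamped observation from an input module.
    Only the timestamp participates in ordering."""
    timestamp: float
    modality: str = field(compare=False)
    payload: dict = field(compare=False)

class FusionHub:
    """Hypothetical fusion point: merges events from independent
    modules (gaze tracker, multi-touch surface, affect estimator)
    into a single time-ordered stream for a narrative manager."""

    def __init__(self):
        self._queue = []  # min-heap ordered by event timestamp

    def post(self, modality, payload, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        heapq.heappush(self._queue, ModalityEvent(ts, modality, payload))

    def drain(self):
        """Return and clear all pending events in timestamp order."""
        events = []
        while self._queue:
            events.append(heapq.heappop(self._queue))
        return events

# Example: three modules report out of order; the hub re-sorts them.
hub = FusionHub()
hub.post("gaze", {"target": "character_A"}, timestamp=1.0)
hub.post("touch", {"x": 120, "y": 340}, timestamp=0.5)
hub.post("affect", {"frustration": 0.2}, timestamp=1.5)
ordered = [e.modality for e in hub.drain()]
print(ordered)  # → ['touch', 'gaze', 'affect']
```

Decoupling producers from the consumer this way lets each recognizer run at its own rate, which is one plausible reading of the "general framework" extension the abstract mentions.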