In this paper, we present IrisTK, a toolkit for the rapid development of real-time systems for multi-party face-to-face interaction. The toolkit consists of a message-passing system, a set of modules for multi-modal input and output, and a dialog authoring language based on the notion of statecharts. The toolkit has been applied in a large-scale study in a public museum setting, where the back-projected robot head Furhat engaged visitors in multi-party dialog.
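To illustrate the general idea behind statechart-based dialog authoring, the sketch below shows a minimal event-driven state machine in Python. This is a hypothetical illustration of the concept only; the state names and event names are invented, and this is not IrisTK's actual authoring syntax or API.

```python
# A minimal statechart-style dialog manager: each state maps incoming
# event names to target states. State names ("Idle", "Greet", "Answer")
# and event names ("user.enter", etc.) are hypothetical examples.

class State:
    def __init__(self, name, transitions):
        self.name = name
        self.transitions = transitions  # event name -> target state name


class Statechart:
    def __init__(self, states, initial):
        self.states = {s.name: s for s in states}
        self.current = initial

    def on_event(self, event):
        # Follow the transition if the current state handles this event;
        # otherwise stay in place (events not handled are ignored).
        target = self.states[self.current].transitions.get(event)
        if target is not None:
            self.current = target
        return self.current


# A toy three-state dialog flow: wait for a visitor, greet them,
# answer questions until they leave.
dm = Statechart(
    [
        State("Idle", {"user.enter": "Greet"}),
        State("Greet", {"user.speech": "Answer"}),
        State("Answer", {"user.speech": "Answer", "user.leave": "Idle"}),
    ],
    initial="Idle",
)

print(dm.on_event("user.enter"))   # -> Greet
print(dm.on_event("user.speech"))  # -> Answer
print(dm.on_event("user.leave"))   # -> Idle
```

In a full statechart formalism, states can additionally be nested and have entry/exit actions, which is what makes the notation attractive for structuring complex multi-party dialog flows.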