Multimodal conversational spoken dialogue with physical and virtual agents offers a promising interface for motivating and supporting users in the health and fitness domain. This paper describes how such multimodal conversational Companions can be implemented to support their owners in a range of pervasive and mobile settings. We present concrete system architectures; virtual, physical, and mobile multimodal interfaces; and interaction management techniques for such Companions. In particular, we show how knowledge representation and the separation of low-level interaction modelling from high-level reasoning at the domain level make it possible to implement distributed, yet coherent, interaction with Companions. The distribution is enabled by a dialogue plan that carries information from the domain-level planner to dialogue management, and from there to a separate mobile interface. This model lets each part of the system handle the same information from its own perspective without duplicating logic, and makes it possible to separate task-specific from conversational dialogue management. In addition to the technical descriptions, we present results from the first evaluations of the Companions interfaces.
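The plan-based distribution described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the Companions implementation: all names (`DomainPlanner`, `DialogueManager`, `MobileInterface`, the plan-step fields) are hypothetical, chosen only to show how a dialogue plan can let each layer handle the same information from its own perspective without overlapping logic.

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    topic: str       # domain-level content, e.g. a fitness activity
    utterance: str   # surface prompt for the interface layer

@dataclass
class DialoguePlan:
    steps: list = field(default_factory=list)

class DomainPlanner:
    """High-level domain reasoning: decides *what* to talk about."""
    def plan_session(self) -> DialoguePlan:
        # Hypothetical fitness-session plan.
        return DialoguePlan([
            PlanStep("warm_up", "Shall we start with a 5-minute warm-up?"),
            PlanStep("main_exercise", "Ready for today's 20-minute jog?"),
        ])

class DialogueManager:
    """Low-level interaction modelling: sequences dialogue moves
    from the plan, without re-doing any domain reasoning."""
    def __init__(self, plan: DialoguePlan):
        self.plan = plan
        self.index = 0

    def next_move(self):
        if self.index >= len(self.plan.steps):
            return None
        step = self.plan.steps[self.index]
        self.index += 1
        return step.utterance

class MobileInterface:
    """Separate mobile front end: renders moves it receives,
    knowing nothing about planning."""
    def render(self, move: str) -> str:
        return f"[phone] {move}"

# The plan flows planner -> dialogue manager -> mobile interface.
planner = DomainPlanner()
dm = DialogueManager(planner.plan_session())
ui = MobileInterface()
outputs = []
while (move := dm.next_move()) is not None:
    outputs.append(ui.render(move))
```

Because only the plan object crosses each boundary, the dialogue manager and the interface could run as distributed components while still producing a coherent interaction.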