Ordinary User Oriented Model Construction for Assisting Conversational Agents
WI-IATW '06 Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology
A conversational agent can be useful for assisting naïve users in operating a graphical interface. Such an assistant requires three capabilities: understanding users' requests, reasoning, and intuitive output. In this paper we introduce the DAFT-LEA architecture, which enables assistant agents to answer questions asked by naïve users about the structure and functioning of graphical interfaces. Through a unified software engineering approach, this architecture integrates a linguistic parser for understanding users' requests, a rational agent for reasoning about the graphical application, and a 2D cartoon-like agent for multimodal output. We describe how the architecture has been applied to three different assistance contexts, and how it was incrementally defined through the collection of a corpus of users' requests for assistance. This approach can inform the design of other assistance applications, since it cleanly separates the original graphical application, its abstract DAFT model, and the linguistic processing of users' requests.
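The three-component pipeline described above (parser → rational agent → embodied agent) can be sketched as follows. This is a minimal illustrative sketch only: all class names, method names, and the dictionary-based interface model are assumptions for exposition, not the actual DAFT-LEA API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """Formal representation of a user's assistance request (hypothetical)."""
    text: str

class LinguisticParser:
    """Stands in for the linguistic parser: maps an utterance to a Request."""
    def parse(self, utterance: str) -> Request:
        # A real parser would perform syntactic/semantic analysis here.
        return Request(text=utterance.strip().lower())

class RationalAgent:
    """Stands in for the rational agent: reasons over an abstract model
    of the graphical application (here, a widget -> description mapping)."""
    def __init__(self, model: dict):
        self.model = model
    def answer(self, request: Request) -> str:
        for widget, description in self.model.items():
            if widget in request.text:
                return description
        return "Sorry, I could not find that element in the interface model."

class EmbodiedAgent:
    """Stands in for the 2D cartoon-like agent: renders the answer
    multimodally (reduced to annotated text in this sketch)."""
    def present(self, answer: str) -> str:
        return f"[agent smiles] {answer}"

def assist(utterance: str, model: dict) -> str:
    """Chains the three components into one assistance turn."""
    request = LinguisticParser().parse(utterance)
    answer = RationalAgent(model).answer(request)
    return EmbodiedAgent().present(answer)
```

For example, with a toy model `{"save button": "Saves the current document."}`, the call `assist("What does the save button do?", model)` routes the parsed request through the reasoning step and returns the rendered answer. The value of the separation is that each stage can be replaced independently, mirroring the paper's split between the graphical application, its abstract model, and the linguistic processing.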