Previous work suggests that the value of reminding a conversational partner of mutually known information depends on the conversants' attentional state, their resource limits, and the resource demands of the task. In this paper, we propose and evaluate several models of how an agent decides whether to communicate a reminder. We elaborate on previous findings by exploring how attentional state and resource bounds are incorporated into the decision-making process so that reminders aid the performance of agents during collaborative problem solving. We test two main hypotheses using a multi-agent problem-solving simulation testbed: (1) an agent decides to present salient knowledge only when doing so reduces overall problem-solving effort, and (2) an agent can use its own attentional state as a model of the attentional state of its partner when assessing the effort trade-offs of communicating a reminder. Our results support both hypotheses, suggesting that the proposed models should be further tested for multi-agent communication in problem-solving situations.
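The two hypotheses can be illustrated with a minimal sketch. Everything below — the class and function names, the limited-capacity focus list, and the specific cost values — is an illustrative assumption, not the authors' implementation: an agent keeps a small set of facts in focus, estimates its partner's retrieval cost from its *own* attentional state (hypothesis 2), and issues a reminder only when the communication cost plus the partner's cheap in-focus retrieval is less than the costly recall the partner would otherwise perform (hypothesis 1).

```python
# Hypothetical sketch of a cost-based reminder decision.
# All names and cost parameters are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AttentionalState:
    """Limited-capacity store of salient facts, most recently attended first."""
    capacity: int = 3
    items: list = field(default_factory=list)

    def attend(self, fact):
        # Re-attending moves a fact to the front; overflow evicts the oldest.
        if fact in self.items:
            self.items.remove(fact)
        self.items.insert(0, fact)
        del self.items[self.capacity:]

    def retrieval_cost(self, fact, in_focus_cost=1.0, recall_cost=5.0):
        # Facts in focus are cheap to retrieve; others need costly recall.
        return in_focus_cost if fact in self.items else recall_cost

def should_remind(own_state, fact, communication_cost=2.0, in_focus_cost=1.0):
    """Remind only when it reduces estimated total effort (hypothesis 1),
    using one's own attentional state as a proxy for the partner's
    (hypothesis 2)."""
    partner_cost_without = own_state.retrieval_cost(fact)
    # A reminder puts the fact in the partner's focus, so the partner
    # pays only the in-focus cost, plus the speaker's communication cost.
    return communication_cost + in_focus_cost < partner_cost_without

agent = AttentionalState(capacity=3)
for f in ["goal: repair", "tool: wrench", "step: shut valve", "step: drain"]:
    agent.attend(f)  # the fourth attend evicts "goal: repair" from focus

print(should_remind(agent, "goal: repair"))  # evicted, recall is costly -> True
print(should_remind(agent, "step: drain"))   # still in focus -> False
```

The same threshold structure lets resource bounds enter directly: shrinking `capacity` or raising `recall_cost` models a more heavily loaded partner, making reminders pay off more often.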