Advances in multiagent systems have led to their successful application in experiential training simulations, where students learn by interacting with agents that represent people, groups, structures, etc. These multiagent simulations must model the training scenario so that the students' success is correlated with the degree to which they follow the intended pedagogy. As these simulations grow in size and richness, it becomes harder to guarantee that the agents accurately encode the pedagogy. Testing with human subjects provides the most accurate feedback, but it can explore only a limited subspace of simulation paths. In this paper, we present a mechanism for using human data to verify the degree to which the simulation encodes the intended pedagogy. We begin with an analysis of data from a deployed multiagent training simulation, then present an automated mechanism that uses the human data to generate a distribution suitable for sampling simulation paths. By generalizing from a small set of human data, the automated approach can systematically explore a much larger space of possible training paths and verify the degree to which a multiagent training simulation adheres to its intended pedagogy.
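The abstract does not specify how the sampling distribution is derived from human data. As a hedged illustration of the general idea (not the authors' actual method), one could fit a smoothed per-state action distribution from logged human trajectories and then sample new simulation paths from it; the smoothing term lets sampling reach state-action pairs never observed in the small human data set. All names below (`fit_action_distribution`, `sample_path`, the toy states and actions) are hypothetical.

```python
import random
from collections import Counter, defaultdict

def fit_action_distribution(human_paths, all_actions, smoothing=1.0):
    """Estimate P(action | state) from human trajectories.

    Each trajectory is a list of (state, action) pairs. Laplace
    smoothing assigns nonzero probability to actions the humans never
    chose, so sampled paths can explore beyond the observed data.
    """
    counts = defaultdict(Counter)
    for path in human_paths:
        for state, action in path:
            counts[state][action] += 1
    dist = {}
    for state, ctr in counts.items():
        total = sum(ctr.values()) + smoothing * len(all_actions)
        dist[state] = {a: (ctr[a] + smoothing) / total for a in all_actions}
    return dist

def sample_path(dist, transition, all_actions, start, length, rng=random):
    """Sample one simulation path by drawing actions from the fitted
    distribution; unseen states fall back to a uniform choice."""
    path, state = [], start
    uniform = {a: 1.0 / len(all_actions) for a in all_actions}
    for _ in range(length):
        probs = dist.get(state, uniform)
        action = rng.choices(list(probs), weights=list(probs.values()))[0]
        path.append((state, action))
        state = transition(state, action)  # simulation's state dynamics
    return path
```

Paths sampled this way could then be run through the simulation's scoring logic to check whether pedagogically correct behavior is in fact rewarded, which is the verification step the abstract describes.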