Another look at search-based drama management
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 2
A drama manager (DM) is a system that monitors an interactive experience, such as a computer game, and intervenes to keep the global experience in line with the author's goals without diminishing the player's interactive agency. In declarative optimization-based drama management (DODM), an author declaratively specifies desired properties of the experience, and the DM intervenes in a way that optimizes the specified metric. The initial DODM approach used online search to optimize an experience-quality function. Later work questioned both online search as a technical approach and the experience-quality optimization framework itself. Recent work on targeted trajectory distribution Markov decision processes (TTD-MDPs) replaced the experience-quality metric with a metric, and an associated algorithm, based on targeting distributions over experiences. We show that, though superficially quite different, the original optimization formulation and TTD-MDPs are variants of the same underlying search algorithm, and that offline cached search, as performed by the TTD-MDP algorithm, allows the original search-based systems to achieve results similar to those of TTD-MDPs. Furthermore, we argue that the original idea of optimizing an experience-quality function does not destroy interactive agency, as had previously been argued, and that it can in fact capture that goal directly.
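The contrast between the two formulations can be illustrated with a minimal sketch. The trajectory tree, the experience-quality values, and the target distribution below are all invented for illustration (they are not from the paper), and the sketch ignores stochastic player responses: it shows only how a quality-optimizing DM picks the child with the best reachable quality, while a TTD-MDP-style DM samples children in proportion to the target probability mass beneath them. Both walk the same trajectory tree, which is the structural similarity the abstract points to.

```python
# Hypothetical trajectory tree over a toy story: each key is a prefix of
# story events, each value lists the events the DM can steer toward next.
TREE = {
    (): ["a", "b"],
    ("a",): ["x", "y"],
    ("b",): ["x", "y"],
}

# Author-specified experience-quality function over complete trajectories
# (illustrative numbers only).
LEAF_QUALITY = {
    ("a", "x"): 0.9, ("a", "y"): 0.2,
    ("b", "x"): 0.5, ("b", "y"): 0.7,
}

# Author-specified target distribution over complete trajectories
# (illustrative numbers only; sums to 1).
TARGET = {
    ("a", "x"): 0.5, ("a", "y"): 0.1,
    ("b", "x"): 0.2, ("b", "y"): 0.2,
}

def leaves_under(prefix):
    """All complete trajectories reachable from this prefix."""
    if prefix in LEAF_QUALITY:
        return [prefix]
    return [leaf for child in TREE[prefix]
            for leaf in leaves_under(prefix + (child,))]

def quality_optimizing_choice(prefix):
    """DODM-style step: pick the child with the best reachable quality."""
    return max(TREE[prefix],
               key=lambda c: max(LEAF_QUALITY[l]
                                 for l in leaves_under(prefix + (c,))))

def ttd_style_policy(prefix):
    """TTD-MDP-style step: probability of each child is proportional to the
    target mass of the trajectories beneath it."""
    mass = {c: sum(TARGET[l] for l in leaves_under(prefix + (c,)))
            for c in TREE[prefix]}
    total = sum(mass.values())
    return {c: m / total for c, m in mass.items()}

print(quality_optimizing_choice(()))  # the deterministic, quality-maximizing step
print(ttd_style_policy(()))           # the stochastic, distribution-targeting step
```

Both functions traverse the same tree with the same aggregation-over-leaves structure; only the aggregation (max of quality vs. sum of target mass) differs, which is one way to see the two approaches as variants of one underlying search.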