Virtual reality, art, and entertainment
Presence: Teleoperators and Virtual Environments - Premier issue
Hamlet on the Holodeck: The Future of Narrative in Cyberspace
Managing interaction between users and agents in a multi-agent storytelling environment
Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '03)
Guiding interactive drama
Reinforcement learning for declarative optimization-based drama management
Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '06)
Autonomous nondeterministic tour guides: improving quality of experience with TTD-MDPs
Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems
A globally optimal algorithm for TTD-MDPs
Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems
Another look at search-based drama management
Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3
Targeting specific distributions of trajectories in MDPs
Proceedings of the 21st National Conference on Artificial Intelligence (AAAI '06) - Volume 2
Authorial idioms for target distributions in TTD-MDPs
Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI '07) - Volume 1
A drama manager (DM) monitors an interactive experience, such as a computer game, and intervenes to shape the global experience so that it satisfies the author's expressive goals without reducing the player's interactive agency. In declarative optimization-based drama management (DODM), the author declaratively specifies an evaluation of experience quality; the DM optimizes its interventions to maximize that metric. The initial DODM approach used online search to optimize the experience-quality function. Subsequent work questioned whether online search could perform well in general, and proposed alternative optimization frameworks such as reinforcement learning. Recent work on targeted trajectory distribution Markov decision processes (TTD-MDPs) replaced the experience-quality metric with a metric, and an associated algorithm, based on targeting a distribution over experiences. We argue that optimizing an experience-quality function does not destroy interactive agency, as has been claimed, and that in fact it can capture that goal directly. We further show that, though apparently quite different on the surface, the original search approach and TTD-MDPs use variants of the same underlying search algorithm, and that offline cached search, as is done by the TTD-MDP algorithm, allows search-based systems to achieve results similar to those of TTD-MDPs.
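To make the comparison in the abstract concrete, the following is a minimal illustrative sketch (not the paper's actual algorithms) of the shared structure: a toy story model in which both a search-based DM, which maximizes an author-specified quality function over complete trajectories, and a TTD-MDP-style DM, which instead targets a distribution over trajectories, run the same tree traversal. All names, the story model, and the quality numbers are hypothetical; the softmax target is just one simple way to turn search values into a trajectory distribution.

```python
import math

def successors(trajectory):
    """Next plot points reachable from a partial trajectory (toy model:
    every story is a sequence of exactly three plot points)."""
    if len(trajectory) >= 3:
        return []
    return [trajectory + (a,) for a in ("fight", "clue")]

def quality(trajectory):
    """Hypothetical author-specified evaluation of a complete trajectory."""
    return trajectory.count("clue") - 0.5 * trajectory.count("fight")

def best_quality(trajectory):
    """Search-based DM: exhaustive search for the best completion's quality."""
    nexts = successors(trajectory)
    if not nexts:
        return quality(trajectory)
    return max(best_quality(t) for t in nexts)

def target_policy(trajectory, temperature=1.0):
    """TTD-MDP-style DM: the same traversal, but instead of a single argmax
    it returns a probability distribution over next plot points (here a
    softmax of the search values), so the DM samples experiences rather
    than forcing one optimal story on every player."""
    nexts = successors(trajectory)
    vals = [best_quality(t) for t in nexts]
    exps = [math.exp(v / temperature) for v in vals]
    z = sum(exps)
    return {t[-1]: e / z for t, e in zip(nexts, exps)}
```

Both functions visit the same search tree; they differ only in how the backed-up values are used, which is the sense in which the two approaches share an underlying algorithm.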