A long-standing challenge in interactive entertainment is the creation of story-based games with dynamically responsive storylines. Such games are populated by multiple objects and autonomous characters, and must provide a coherent story experience while giving the player freedom of action. To maintain coherence, the game author must provide for modifying the world in reaction to the player's actions, directing agents to act in particular ways (overriding or modulating their autonomy), or causing inanimate objects to reconfigure themselves "behind the player's back".

Declarative optimization-based drama management is one mechanism for allowing the game author to specify a drama manager (DM) to coordinate these modifications, along with a story the DM should aim for. The premise is that the author can easily describe the salient properties of the story while leaving it to the DM to react to the player and direct agent actions. Although promising, early search-based approaches have been shown to scale poorly.

Here, we improve upon the state of the art by using reinforcement learning and a novel training paradigm to build an adaptive DM that manages the tradeoff between exploration and story coherence. We present results on two games and compare our performance with other approaches.
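To make the idea concrete, the core loop of an RL-based drama manager can be sketched as tabular Q-learning over abstract story states. This is a minimal illustration only, not the paper's actual formulation: the state space, the DM action set (`do_nothing`, `hint`, `deny`), the toy player dynamics, and all reward values are assumptions chosen for the sketch.

```python
import random

# Toy model: states 0..4 are plot points along an authored storyline;
# reaching the last one counts as a coherent story completion.
ACTIONS = ["do_nothing", "hint", "deny"]  # assumed DM intervention set
N_STATES = 5
GOAL = N_STATES - 1

def step(state, action, rng):
    """Simulate the player's response to a DM intervention (toy dynamics)."""
    if action == "hint":
        # Nudging the player forward usually works, at a small
        # coherence/intrusiveness penalty (values are illustrative).
        next_state = min(state + 1, GOAL) if rng.random() < 0.9 else state
        reward = -0.1
    elif action == "deny":
        # Blocking actions mostly prevents regression, at a larger penalty.
        next_state = state if rng.random() < 0.8 else max(state - 1, 0)
        reward = -0.2
    else:
        # Leaving the player alone preserves freedom but risks wandering.
        next_state = min(state + 1, GOAL) if rng.random() < 0.3 else max(state - 1, 0)
        reward = 0.0
    if next_state == GOAL:
        reward += 10.0  # bonus for completing the authored story
    return next_state, reward

def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning for the DM policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        for _ in range(50):  # cap on episode length
            if rng.random() < epsilon:                       # explore
                action = rng.choice(ACTIONS)
            else:                                            # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, r = step(state, action, rng)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = nxt
            if state == GOAL:
                break
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

The epsilon-greedy step is where the exploration/coherence tradeoff the abstract mentions shows up in miniature: exploratory actions let the DM discover how players respond, while the learned greedy policy steers toward the authored ending with as little intervention as the reward penalties allow.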