Another look at search-based drama management

  • Authors:
  • Mark J. Nelson; Michael Mateas

  • Affiliations:
  • College of Computing, Georgia Institute of Technology; Computer Science Department, University of California, Santa Cruz

  • Venue:
  • AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2
  • Year:
  • 2008

Abstract

A drama manager (DM) monitors an interactive experience, such as a computer game, and intervenes to shape the global experience so that it satisfies the author's expressive goals without decreasing the player's interactive agency. In declarative optimization-based drama management (DODM), the author declaratively specifies an evaluation function capturing desired properties of the experience, and the DM optimizes its interventions to maximize that metric. The initial DODM approach used online search to optimize an experience-quality function. Subsequent work questioned whether online search could perform well in general, and proposed alternative optimization frameworks such as reinforcement learning. Recent work on targeted trajectory distribution Markov decision processes (TTD-MDPs) replaced the experience-quality metric with a metric and associated algorithm based on targeting experience distributions. We argue that optimizing an experience-quality function does not destroy interactive agency, as has been claimed, and that in fact it can capture that goal directly. We further show that, though apparently quite different on the surface, the original search approach and TTD-MDPs actually use variants of the same underlying search algorithm, and that offline cached search, as performed by the TTD-MDP algorithm, allows the search-based systems to achieve results similar to those of TTD-MDPs.
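
The paper itself supplies the details, but as a rough illustration of the shared search skeleton the abstract refers to, here is a minimal expectimax-style sketch in Python: DM nodes maximize over candidate interventions, player nodes take an expectation over a probabilistic player model, and leaves are scored by the author-supplied quality function. All names (dm_search, player_model, quality) and the plot-point-tuple state model are hypothetical simplifications for this sketch, not code from the paper:

    from typing import Callable, List, Sequence, Tuple

    History = Tuple[str, ...]  # plot points / moves seen so far, in order
    PlayerModel = Callable[[History], List[Tuple[str, float]]]

    def dm_search(
        history: History,
        dm_actions: Sequence[str],
        player_model: PlayerModel,
        quality: Callable[[History], float],
        depth: int,
    ) -> Tuple[str, float]:
        # Expectimax: the DM maximizes over interventions; the player's
        # responses are averaged under a probabilistic player model.
        # `quality` is the author-specified experience evaluation that
        # DODM optimizes.
        if depth == 0:
            return "", quality(history)
        best_action, best_value = "", float("-inf")
        for action in dm_actions:
            extended = history + (action,)
            # Expected quality over the player's likely responses.
            expected = 0.0
            for move, prob in player_model(extended):
                _, value = dm_search(
                    extended + (move,), dm_actions, player_model, quality, depth - 1
                )
                expected += prob * value
            if expected > best_value:
                best_action, best_value = action, expected
        return best_action, best_value

Run online at each decision point, this is schematically the original search-based approach; memoizing dm_search over all reachable histories ahead of time (e.g. with functools.lru_cache over a hashable state) would be an offline cached variant of the kind the abstract compares to TTD-MDPs, which instead precompute a policy targeting a distribution over trajectories.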