Motivated reinforcement learning for adaptive characters in open-ended simulation games

  • Authors:
  • Kathryn Elizabeth Merrick; Mary Lou Maher

  • Affiliations:
  • University of Sydney, Sydney, Australia; University of Sydney, Sydney, Australia

  • Venue:
  • Proceedings of the International Conference on Advances in Computer Entertainment Technology (ACE)
  • Year:
  • 2007

Abstract

Recently, a new generation of virtual worlds has emerged that provides users with open-ended modelling tools for creating and modifying world content. The result is evolving virtual spaces for commerce, education and social interaction. In general, these virtual worlds are not games and have no concept of winning; nonetheless, their open-ended modelling capacity is compelling. The rising popularity of open-ended virtual worlds suggests that there may also be potential for a new generation of computer games situated in open-ended environments. A key issue in the development of such games, however, is the design of non-player characters that can respond autonomously to unpredictable, open-ended changes in their environment. This paper considers the impact of open-ended modelling on character development in simulation games. Motivated reinforcement learning using context-free grammars is proposed as a means of representing unpredictable, evolving worlds for character reasoning. This technique is used to design adaptive characters for the Second Life virtual world, creating a new kind of open-ended simulation game.
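To make the core idea concrete, the sketch below shows a minimal motivated reinforcement learning loop: a tabular Q-learning agent whose reward is an intrinsic "interest" signal derived from the novelty of observed state descriptions, with states represented as sentences (standing in for strings derived from a context-free grammar describing world objects). The environment, vocabulary, class names and parameter values are illustrative assumptions for exposition, not the paper's actual implementation or the Second Life integration.

```python
# Minimal sketch, assuming novelty-based intrinsic reward and tabular Q-learning.
# All names, actions and parameters here are hypothetical placeholders.
import random
from collections import defaultdict


class MotivatedAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)      # Q-values keyed by (state, action)
        self.visits = defaultdict(int)   # visit counts per state sentence

    def intrinsic_reward(self, state):
        # Motivation signal: unfamiliar state descriptions are more rewarding.
        self.visits[state] += 1
        return 1.0 / self.visits[state]

    def choose_action(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, next_state):
        # Standard Q-learning update, but the reward is generated internally
        # rather than supplied by a designer-defined game objective.
        reward = self.intrinsic_reward(next_state)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def step(state, action):
    # Toy open-ended environment: state sentences mention world objects that
    # could be extended at any time as users add new content.
    objects = ["tree", "forge", "ore", "anvil"]
    return f"near {random.choice(objects)} after {action}"


agent = MotivatedAgent(actions=["move", "pick_up", "use"])
state = "start"
for _ in range(1000):
    action = agent.choose_action(state)
    next_state = step(state, action)
    agent.update(state, action, next_state)
    state = next_state
```

Because the reward comes from the agent's own novelty measure rather than a fixed win condition, the character keeps generating learning goals as users introduce new objects, which is the property the paper exploits for characters in open-ended worlds.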