Motivated reinforcement learning for non-player characters in persistent computer game worlds

  • Authors:
  • Kathryn Merrick; Mary Lou Maher

  • Affiliations:
  • University of Sydney and National ICT Australia, Alexandria, NSW; University of Sydney, NSW

  • Venue:
  • Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology
  • Year:
  • 2006

Abstract

Massively multiplayer online computer games are played in complex, persistent virtual worlds. Over time, the landscape of these worlds evolves and changes as players create and personalise their own virtual property. In contrast, many non-player characters that populate virtual game worlds possess a fixed set of pre-programmed behaviours and lack the ability to adapt and evolve in time with their surroundings. This paper presents motivated reinforcement learning agents as a means of creating non-player characters that can both evolve and adapt. Motivated reinforcement learning agents explore their environment and learn new behaviours in response to interesting experiences, allowing them to display progressively evolving behavioural patterns. In dynamic worlds, environmental changes provide an additional source of interesting experiences, triggering further learning and allowing the agents to adapt their existing behavioural patterns as their surroundings change.
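
The core mechanism described in the abstract, reinforcement learning driven by an intrinsic "interest" signal rather than a hand-authored game reward, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the novelty-based interest function, the tabular Q-learning agent, and the event/state interface are all assumptions introduced for the example.

```python
# Sketch of a "motivated" Q-learning loop: the reward is computed by the agent
# from how interesting (here: how novel) an observed event is, rather than
# supplied by the game world. Illustrative only; not the paper's algorithm.
import random
from collections import defaultdict

class MotivatedQAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)           # Q-values keyed by (state, action)
        self.event_counts = defaultdict(int)  # how often each event has been seen
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def interest(self, event):
        """Intrinsic reward: rarely seen events are more interesting than familiar ones."""
        self.event_counts[event] += 1
        return 1.0 / self.event_counts[event]

    def choose(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, event, next_state):
        # Standard Q-learning update, but the reward is the agent's own interest signal.
        reward = self.interest(event)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

Under this kind of scheme, a change in the environment (for example, a player building new virtual property) produces previously unseen events that score high interest, which is what would drive the continued learning and adaptation the abstract describes.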