Current computer games are set in increasingly complex and dynamic virtual environments. Massively multiplayer online games, for example, are played in persistent virtual worlds, which evolve and change as players create and personalize their own virtual property. In contrast, technologies for controlling the behavior of the nonplayer characters that populate virtual game worlds are frequently limited to preprogrammed rules. Characters using fixed rule-sets cannot adapt as their environment changes. Motivated reinforcement learning offers an alternative approach to character design that can produce nonplayer characters that both evolve and adapt in dynamic environments. This article presents and evaluates two computational models of motivation for use in nonplayer characters in persistent computer game worlds. These models represent motivation as an ongoing search for novelty, interest, and competence. Two metrics are introduced to evaluate the adaptability of characters controlled by motivated reinforcement learning agents using different models of motivation. These metrics characterize the behavior of nonplayer characters in terms of the variety and complexity of learned behaviors. An empirical evaluation of characters in simulated game scenarios shows that characters motivated by the search for competence are more adaptable in dynamic environments than those motivated by interest and novelty alone.
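To make the idea of motivation-driven learning concrete, the sketch below shows one common way an intrinsic reward can be wired into a tabular Q-learning agent: a count-based novelty signal, where rarely experienced events are more rewarding than familiar ones. This is only an illustrative sketch of the general technique, not the article's actual motivation models; the class name, the world states (`"forge"`), and the actions (`"mine"`, `"craft"`) are invented for the example.

```python
import random
from collections import defaultdict

class MotivatedQLearner:
    """Tabular Q-learning agent driven by an intrinsic novelty reward.

    Illustrative sketch: instead of an external task reward, the agent
    rewards itself for (state, action) events it has rarely experienced,
    so its behavior shifts as parts of the world become familiar.
    """

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.q = defaultdict(float)        # Q-values keyed by (state, action)
        self.counts = defaultdict(int)     # visit counts for novelty
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def novelty_reward(self, state, action):
        # Count-based novelty: reward decays as an event becomes familiar.
        self.counts[(state, action)] += 1
        return 1.0 / self.counts[(state, action)]

    def choose(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, next_state):
        # Standard Q-learning backup, with the intrinsic reward in place
        # of an external one. Returns the reward for inspection.
        r = self.novelty_reward(state, action)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = r + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
        return r

# Repeating the same event yields a diminishing intrinsic reward,
# nudging the character toward unexplored behavior.
agent = MotivatedQLearner(actions=["mine", "craft"])
r1 = agent.update("forge", "mine", "forge")  # first visit: reward 1.0
r2 = agent.update("forge", "mine", "forge")  # second visit: reward 0.5
```

A competence-based motivation, as favored by the article's results, would instead reward the agent in proportion to its learning progress (for example, the change in prediction or TD error over time) rather than raw novelty, so that mastered behaviors stay rewarding while they are still improving.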