Opponent models are necessary in games where the game state is only partially known to the player, since the player must infer the state of the game from the opponent's actions. This paper presents an architecture and a process for developing neural network game players that utilize explicit opponent models to improve play against unseen opponents. The model is constructed as a mixture over a set of cardinal opponents, i.e., opponents that represent maximally distinct game strategies. The model is trained to estimate the likelihood that the opponent will make the same move as each of the cardinal opponents would in a given game situation. Experiments were performed in the game of Guess It, a simple game of imperfect information that has no single optimal strategy for defeating specific opponents; opponent modeling is therefore crucial to playing this game well. Both the opponent-modeling and game-playing neural networks were trained using NeuroEvolution of Augmenting Topologies (NEAT). The results demonstrate that game-playing networks provided with the model outperform networks not provided with it when playing against the same previously unseen opponents. The cardinal mixture architecture therefore constitutes a promising approach for general and dynamic opponent modeling in game playing.
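The cardinal-opponent mixture described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each cardinal opponent exposes a policy mapping a game state to a distribution over moves, and that the trained model has produced a likelihood weight for each cardinal opponent; all names (`mixture_prediction`, `bluffer`, `honest`) are hypothetical.

```python
from typing import Callable, Dict, List, Tuple

State = Tuple  # abstract game-state representation (assumed)
Policy = Callable[[State], Dict[str, float]]  # move -> probability

def mixture_prediction(state: State,
                       cardinal_policies: List[Policy],
                       weights: List[float]) -> Dict[str, float]:
    """Blend the cardinal opponents' move distributions using the
    model's estimated likelihoods that the real opponent plays like
    each cardinal opponent in this situation."""
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize the likelihoods
    combined: Dict[str, float] = {}
    for policy, w in zip(cardinal_policies, norm):
        for move, p in policy(state).items():
            combined[move] = combined.get(move, 0.0) + w * p
    return combined

# Toy example: two illustrative cardinal opponents in a two-move game.
bluffer = lambda s: {"bluff": 0.8, "call": 0.2}
honest  = lambda s: {"bluff": 0.1, "call": 0.9}
pred = mixture_prediction((), [bluffer, honest], [0.75, 0.25])
# pred["bluff"] = 0.8*0.75 + 0.1*0.25 = 0.625
```

A game-playing network would then receive this predicted distribution (or the mixture weights themselves) as extra inputs alongside the raw game state.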