We introduce a new reinforcement learning benchmark based on the classic platform game Super Mario Bros. The benchmark has a high-dimensional input space, and achieving a good score requires sophisticated and varied strategies. However, it has tunable difficulty, and at the lowest difficulty setting a decent score can be achieved using rudimentary strategies and only a small fraction of the input space. To investigate the properties of the benchmark, we evolve neural network-based controllers using different network architectures and input spaces. We show that it is relatively easy to learn basic strategies capable of clearing individual levels of low difficulty, but that these controllers generalize poorly to unseen levels and struggle to take larger parts of the input space into account. A number of directions worth exploring for learning better-performing strategies are discussed.
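To make the setup concrete, the sketch below shows the kind of neuroevolution loop the abstract describes: a fixed-topology perceptron that maps a local tile grid around Mario to button presses, with its flat weight vector evolved by truncation selection and Gaussian mutation. The grid size, layer sizes, mutation settings, and the stand-in fitness function are assumptions for illustration only; in practice the Mario benchmark's agent interface would call something like `act` each tick, and fitness would come from the benchmark itself (e.g., distance progressed on one or more levels at the chosen difficulty).

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 22 * 22   # hypothetical local tile grid around Mario, flattened
HIDDEN = 10      # small hidden layer (illustrative size)
BUTTONS = 6      # e.g., left, right, down, jump, speed/fire, up

def init_genome():
    """Flat weight vector encoding a single-hidden-layer perceptron."""
    n = GRID * HIDDEN + HIDDEN * BUTTONS
    return rng.normal(0.0, 0.1, n)

def act(genome, observation):
    """Map the flattened tile grid to button presses (thresholded at 0)."""
    w1 = genome[:GRID * HIDDEN].reshape(GRID, HIDDEN)
    w2 = genome[GRID * HIDDEN:].reshape(HIDDEN, BUTTONS)
    hidden = np.tanh(observation @ w1)
    return (hidden @ w2) > 0.0

def evaluate(genome):
    """Stand-in fitness so the sketch runs on its own.
    In a real experiment this would run the controller in the benchmark
    and return, e.g., average distance progressed over a few levels."""
    return float(np.sum(np.abs(genome[:10])))

def evolve(pop_size=50, elite=10, generations=100, sigma=0.05):
    """Simple (elite + offspring) evolution with Gaussian mutation."""
    population = [init_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:elite]
        children = []
        for _ in range(pop_size - elite):
            p = parents[rng.integers(elite)]          # pick a random elite parent
            children.append(p + rng.normal(0.0, sigma, p.shape))
        population = parents + children
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = evolve(generations=5)
    print("stand-in fitness of best genome:", evaluate(best))
    print("example action:", act(best, np.zeros(GRID)))
```

The same loop accommodates the variations discussed in the paper by changing what `observation` contains (a smaller or larger window of the input space) and what network `act` encodes, while the evaluation hook controls which levels and difficulty settings the fitness reflects.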