In this article we experiment with a two-player strategy board game whose playing models are developed using reinforcement learning and neural networks. The models are trained with human involvement at varying levels of sophistication and density, and compared against fully autonomous play, with the aim of speeding up automatic game development. The experimental results suggest a clear and measurable association between the ability to win games and the ability to do so quickly, while also demonstrating that there is a minimum level of human involvement below which no real learning occurs.
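The abstract does not specify the learning rule used to train the playing models; a common choice for game-playing value functions is temporal-difference learning. Below is a minimal, hypothetical sketch of tabular TD(0) on a toy random-walk task, standing in for the board-game setting described above. All function and variable names are illustrative assumptions, not taken from the paper.

```python
import random

def td0_update(V, s, r, s_next, alpha=0.1, gamma=1.0):
    """One TD(0) step: move V[s] toward the bootstrapped target r + gamma * V[s_next].

    States absent from V (including terminals) are treated as having value 0.
    """
    V.setdefault(s, 0.0)
    target = r + gamma * V.get(s_next, 0.0)
    V[s] += alpha * (target - V[s])
    return V[s]

def run_episode(V, rng, n=5, alpha=0.1):
    """Random walk on states 0..n-1, starting in the middle.

    Reaching the rightmost state pays reward 1; the leftmost pays 0.
    Values are updated online after every transition.
    """
    s = n // 2
    while 0 < s < n - 1:
        s_next = s + rng.choice((-1, 1))
        r = 1.0 if s_next == n - 1 else 0.0
        td0_update(V, s, r, s_next, alpha)
        s = s_next

rng = random.Random(0)
V = {}
for _ in range(5000):
    run_episode(V, rng)
# V[n // 2] approaches the true win probability (0.5 for the centre state)
```

The same update generalises to the paper's setting by replacing the lookup table with a neural network and the random walk with self-play game positions, as in the TD-Gammon line of work.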