Much of artificial intelligence research is focused on devising optimal solutions for challenging, well-defined, but highly constrained problems. However, as we begin creating autonomous agents to operate in the rich environments of modern video games and computer simulations, it becomes important to devise agent behaviors that display the visible attributes of intelligence, rather than simply performing optimally. Such visibly intelligent behavior is difficult to specify with rules or to characterize in terms of quantifiable objective functions, but it is possible to use human intuitions to guide a learning system directly toward the desired sorts of behavior. Policy induction from human-generated examples is a promising approach to training such agents. In this paper, such a method is developed and tested using Lamarckian neuroevolution. Artificial neural networks are evolved to control autonomous agents in a strategy game. The evolution is guided by human-generated examples of play, and the system effectively learns the policies that the player used to generate the examples; that is, the agents learn visibly intelligent behavior. In the future, such methods are likely to play a central role in creating autonomous agents for complex environments, making it possible to generate rich behaviors derived from nothing more formal than the intuitively generated examples of designers, players, or subject-matter experts.
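The core loop the abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: a toy two-class dataset stands in for human-generated (state, action) play examples, each genome is the flat weight vector of a small one-hidden-layer network, and the Lamarckian step is brief supervised training on the examples whose result is written back into the genome before selection and mutation. All names, sizes, and hyperparameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for human-generated (state, action) examples.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # "action" chosen by the demonstrator

N_IN, N_HID = 4, 8
N_W = N_IN * N_HID + N_HID  # genome: input->hidden weights plus hidden->output weights

def forward(w, X):
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = w[N_IN * N_HID:]
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))  # sigmoid output = action probability

def accuracy(w):
    """Fitness: agreement with the demonstrator's actions."""
    return np.mean((forward(w, X) > 0.5) == (y > 0.5))

def lamarckian_step(w, lr=0.1, steps=20):
    """Brief supervised training on the examples; the trained weights are
    written back into the genome (the Lamarckian inheritance)."""
    w = w.copy()
    for _ in range(steps):
        W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
        w2 = w[N_IN * N_HID:]
        h = np.tanh(X @ W1)
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))
        err = p - y                           # cross-entropy gradient w.r.t. logit
        g2 = h.T @ err / len(X)
        gh = np.outer(err, w2) * (1 - h**2)   # backprop through tanh
        g1 = X.T @ gh / len(X)
        w[:N_IN * N_HID] -= lr * g1.ravel()
        w[N_IN * N_HID:] -= lr * g2
    return w

# Evolve: learn from the examples, inherit the learned weights, select, mutate.
pop = [rng.normal(scale=0.5, size=N_W) for _ in range(10)]
for gen in range(15):
    pop = [lamarckian_step(w) for w in pop]
    pop.sort(key=accuracy, reverse=True)
    parents = pop[:5]
    pop = parents + [p + rng.normal(scale=0.05, size=N_W) for p in parents]

best = max(pop, key=accuracy)
```

In this sketch the fitness is simply agreement with the demonstrations; in a richer setting the evolved networks would also be evaluated by playing the game, with the supervised Lamarckian step biasing the population toward the human policy.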