In standard neuro-evolution, a population of networks is evolved on a task, and the network that best solves the task is selected. This network is then fixed and used to solve future instances of the problem. Networks evolved this way handle real-time interaction poorly: it is difficult to evolve a solution ahead of time that copes effectively with every environment that might arise and with every way a user might interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. The approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately balancing conflicting goals. After an initial offline evaluation, the population is allowed to evolve online, and its performance improves considerably. The population not only adapts to novel situations brought about by changes in the opponent's strategy and the game layout, but also improves in situations it has already seen during offline training. This paper describes an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone.
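The online scheme described above can be sketched as a steady-state loop: individuals from the population act in the live environment, accumulate fitness from the rewards they earn, and the worst performer is periodically replaced by a mutated copy of a strong one. This is only a minimal illustration of the idea; the class names, network sizes, mutation scale, and replacement policy below are assumptions for the sketch, not the paper's actual implementation.

```python
import random

class Net:
    """Tiny feedforward network: inputs -> ReLU hidden layer -> one output."""
    def __init__(self, n_in=3, n_hid=4):
        self.w1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.w2 = [random.gauss(0, 1) for _ in range(n_hid)]
        self.fitness = 0.0  # accumulated online reward

    def forward(self, x):
        h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sum(w * hi for w, hi in zip(self.w2, h))

    def mutated(self, sigma=0.3):
        # Offspring = parent weights plus Gaussian noise (sigma is illustrative).
        child = Net(len(self.w1[0]), len(self.w1))
        child.w1 = [[w + random.gauss(0, sigma) for w in row] for row in self.w1]
        child.w2 = [w + random.gauss(0, sigma) for w in self.w2]
        return child

def online_step(pop, observe, reward):
    """One real-time interaction step: a random individual acts, is credited
    with the reward it earns, and the worst-scoring network is replaced by a
    mutant of the best. (A real system would replace on a slower timescale
    than it interacts, so new offspring get evaluated before being judged.)"""
    agent = random.choice(pop)
    action = agent.forward(observe())
    agent.fitness += reward(action)
    worst = min(range(len(pop)), key=lambda i: pop[i].fitness)
    best = max(pop, key=lambda n: n.fitness)
    pop[worst] = best.mutated()
    return action
```

Because evaluation and evolution are interleaved rather than separated into generations, the population can keep tracking an opponent whose strategy changes after deployment, which is the property the abstract argues offline evolution alone cannot provide.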