The development of real-world, fully autonomous agents requires mechanisms that generalize from experience across a wide range of machine learning tasks, such as those in supervised and reinforcement learning. Parametric function approximators, which can model either the environment or the agent's policy, can provide such capabilities. To promote autonomy, these structures should adapt to the problem at hand with little or no input from a human expert. Toward this goal, we propose an adaptive function approximation method that develops suitable neural networks, in the form of reservoir computing systems, through evolution and learning. Our neuro-evolution of augmenting reservoirs approach combines several ideas, each successful on its own, into an algorithm intended to handle a wide range of problems efficiently. In particular, we use the NeuroEvolution of Augmenting Topologies (NEAT) algorithm as a meta-search method for adapting echo state networks to problems encountered by autonomous agents. We evaluate our approach on several benchmarks from time series prediction and reinforcement learning, and compare it against similar state-of-the-art algorithms with promising results.
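To make the building block concrete, the following is a minimal sketch of an echo state network, the reservoir computing model the abstract refers to. All names and hyperparameters here are illustrative assumptions, not taken from the paper: the reservoir weights are random and fixed (scaled to spectral radius below 1 so the echo state property holds), and only the linear readout is trained, here by ridge regression on a toy one-step-ahead sine prediction task.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 50                       # illustrative sizes, not from the paper
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale the recurrent weights to spectral radius < 1 (echo state property:
# the reservoir state is a fading memory of the input history).
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the fixed random reservoir with a scalar input sequence
    and collect the state at every step."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from the current one.
t = np.arange(300)
u = np.sin(0.2 * t)
states = run_reservoir(u[:-1])
washout = 20                              # discard initial transient states
X, y = states[washout:], u[1:][washout:]

# Train only the readout, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)
```

In the paper's setting, a meta-search such as NEAT would adapt the reservoir itself (its topology and weights), while a fast linear method like the ridge fit above still trains the readout.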