The Cascade-Correlation Learning Architecture
Advances in Neural Information Processing Systems 2
Hierarchical Mixtures of Experts and the EM Algorithm
Neural Computation
Neural Computation
Co-evolving Recurrent Neurons Learn Deep Memory POMDPs
GECCO '05: Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation
Learning to Forget: Continual Prediction with LSTM
Neural Computation
Autonomous Shaping: Knowledge Transfer in Reinforcement Learning
ICML '06: Proceedings of the 23rd International Conference on Machine Learning
Training Recurrent Networks by Evolino
Neural Computation
Reinforcement Learning: A Survey
Journal of Artificial Intelligence Research
Efficient Non-linear Control through Neuroevolution
ECML '06: Proceedings of the 17th European Conference on Machine Learning
Recurrent Neural Networks (RNNs) have been shown to solve some hard problems, but learning such problems from scratch typically takes a very long time. For supervised learning, several methods have been proposed to reuse knowledge acquired on previous, similar tasks. However, for Reinforcement Learning (RL), and especially for Partially Observable Markov Decision Processes (POMDPs), these algorithms are difficult to apply directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Experts, and a Two-Level Architecture. Preliminary experiments in the E-maze domain show the potential of these methods: knowledge-based learning time on a new problem is much shorter than learning from scratch, even though the new task looks very different from the previous tasks.
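To make the transfer idea concrete, here is a minimal sketch (not the paper's actual method) of one ingredient shared by several of the cited approaches, notably Evolino: keep a recurrent layer fixed, as if its weights had been obtained on a previous task, and train only a new linear readout for the new task. All names, sizes, and the toy delay task below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 1, 20
# Stand-ins for "transferred" weights from a previous task; both are frozen.
W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
# Rescale so the recurrent dynamics stay stable (spectral radius < 1).
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))

def run_rnn(inputs):
    """Run the frozen tanh RNN over a sequence and collect hidden states."""
    h = np.zeros(n_hidden)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ np.atleast_1d(x) + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

# Toy "new task": predict a one-step-delayed copy of the input,
# which requires the network's memory of the previous input.
T = 200
u = rng.uniform(-1.0, 1.0, T)
target = np.concatenate([[0.0], u[:-1]])

# Only the new linear readout is fitted (by least squares); the
# recurrent part is reused as-is, which is what makes this "transfer".
H = run_rnn(u)
W_out, *_ = np.linalg.lstsq(H, target, rcond=None)
pred = H @ W_out
mse = float(np.mean((pred - target) ** 2))
```

Because only a linear layer is trained, adapting to the new task is far cheaper than retraining the whole RNN, which is the effect the abstract reports for its transfer methods.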