Reinforcement learning (RL) is designed to learn optimal control policies from unsupervised interactions with the environment. Although many successful RL algorithms have been developed, none of them can efficiently tackle problems with high-dimensional state spaces, owing to the "curse of dimensionality," which limits their applicability to real-world scenarios. Here we propose a Sample-Aware Feature Selection algorithm embedded in NEAT, or SAFS-NEAT, to help address this challenge. The algorithm builds upon NEAT, a powerful evolutionary policy-search method, by exploiting the data samples collected during the learning process. These samples permit feature selection techniques from the supervised learning domain to be applied on-line, helping RL scale to problems with high-dimensional state spaces. We show that by exploiting previously observed samples, on-line feature selection enables NEAT to learn near-optimal policies for such problems, and to outperform an existing feature selection algorithm that does not explicitly make use of this available data.