Adaptive internal state space construction method for reinforcement learning of a real-world agent
Neural Networks - Special issue on organisation of computation in brain-like systems
Much research has been conducted on applying reinforcement learning to robots, and learning time remains a central concern. In reinforcement learning, sensor readings are projected onto a state space, and the robot learns the best action to take in each state. As the state space expands with the number of sensors, the number of state-action correspondences the robot must learn grows, and learning becomes time consuming. In this study, we focus on the importance of each sensor for performing a particular task. The sensors relevant to a task differ from task to task, so a robot need not use every installed sensor; the state space should be built from only those sensors essential to the task. With such a reduced state space, the robot can learn state-action correspondences faster than with a state space built from all installed sensors. We therefore propose a learning system in which a robot autonomously selects the sensors essential to a task and constructs a state space from those sensors alone. As the measure of a sensor's importance, we use the correlation coefficient between the sensor's value and the reward received during reinforcement learning. The robot judges the importance of each sensor from this correlation and reduces the state space accordingly, which allows it to learn correspondences efficiently. We confirm the effectiveness of the proposed system through simulation.
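The selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a log of sensor readings and rewards collected during learning, ranks each sensor by the absolute Pearson correlation between its readings and the reward, and keeps sensors above a cutoff. The function name and the `threshold` parameter are hypothetical choices for this sketch; the paper does not specify a cutoff value.

```python
import numpy as np

def select_important_sensors(sensor_log, reward_log, threshold=0.3):
    """Rank sensors by |Pearson correlation| between their readings and
    the reward signal, and keep only those above a (hypothetical) cutoff.

    sensor_log : (T, S) array -- one row of S sensor readings per step
    reward_log : (T,) array   -- reward received at each step
    threshold  : illustrative cutoff on |correlation|, not from the paper
    """
    sensor_log = np.asarray(sensor_log, dtype=float)
    reward_log = np.asarray(reward_log, dtype=float)
    importance = []
    for s in range(sensor_log.shape[1]):
        # np.corrcoef returns the 2x2 correlation matrix; [0, 1] is r
        r = np.corrcoef(sensor_log[:, s], reward_log)[0, 1]
        importance.append(abs(r))
    selected = [s for s, imp in enumerate(importance) if imp >= threshold]
    return selected, importance

# Toy usage: sensor 0 tracks the reward exactly, sensor 1 is unrelated.
rewards = np.arange(8, dtype=float)
sensors = np.column_stack([2 * rewards + 1,          # perfectly correlated
                           np.array([1, 0, 1, 0, 1, 0, 1, 0], float)])
selected, importance = select_important_sensors(sensors, rewards)
```

In this sketch the state space would then be constructed over `selected` only, so its size no longer grows with sensors that carry no information about the reward.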