In this paper, a new approach based on Q-learning is developed for solving the problem of mobile robot path planning in an unknown dynamic environment. Q-learning algorithms have been used widely for solving real-world problems, especially in robotics, since they have been proved to give reliable and efficient solutions thanks to a simple and well-developed theory. However, most researchers who tried to use Q-learning for the mobile robot navigation problem dealt with static environments; they avoided dynamic environments because the problem is more complex and has an infinite number of states. This great number of states makes training the intelligent agent very difficult. In this paper, the Q-learning algorithm is applied to mobile robot navigation in a dynamic environment by limiting the number of states through a new definition of the state space. This reduces the size of the Q-table and hence increases the speed of the navigation algorithm. The conducted experimental simulation scenarios indicate the strength of the proposed approach for mobile robot navigation in a dynamic environment. The results show that the new approach achieves a high hit rate: the robot succeeded in reaching its target along a collision-free path in most cases, which is the most desirable feature of any navigation algorithm.
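The core mechanism described above is standard tabular Q-learning over a deliberately small, discretized state space. The paper's exact state-space definition is not reproduced here, so the following sketch is only illustrative: it assumes hypothetical states keyed by quantized sensor readings and a small discrete action set, and shows how a limited state space keeps the Q-table small.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = ["forward", "left", "right"]  # hypothetical discrete action set

# Q-table keyed by (state, action). Because the state space is limited
# (discretized), this table stays small, which is the idea the abstract
# credits for the speed of the navigation algorithm.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy selection over the current Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a navigation loop, `state` would be some discretization of the robot's situation (e.g., quantized target bearing and nearest-obstacle direction), with positive reward for reaching the target and negative reward for collisions; those specifics are assumptions, not the paper's definitions.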