In many robotic exploration missions, robots must learn policies that allow them to: (i) select high-level goals (e.g., identify specific destinations), (ii) navigate (reach those destinations), and (iii) adapt to their environment (e.g., modify their behavior as environmental conditions change). These policies must also be robust to signal noise and unexpected situations, scale to more complex environments, and respect the robots' physical limitations (e.g., limited battery and computational power). In this paper we evaluate reactive and learning navigation algorithms for exploration robots that must avoid obstacles and reach specific destinations in limited time and with limited observations. Our results show that neuro-evolutionary algorithms with well-designed evaluation functions can perform up to 50% better than reactive algorithms in complex domains where the robot must select paths that lead to specific destinations while avoiding obstacles, particularly in the presence of significant sensor and actuator noise.
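The neuro-evolutionary approach described above can be sketched in miniature: evolve the weights of a small neural controller under sensor noise, scoring each genome with an evaluation function that rewards reaching a destination and penalizes obstacle contact. This is an illustrative toy, not the paper's implementation; the network size, noise level, obstacle layout, and all parameters below are hypothetical.

```python
import math
import random

random.seed(0)

# Tiny feedforward controller: 4 noisy sensor inputs -> 2 motor outputs.
N_IN, N_HID, N_OUT = 4, 6, 2
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # 44 weights total

def forward(w, x):
    """One pass through a 4-6-2 tanh network with weights flattened in w."""
    o1 = N_IN * N_HID          # start of hidden biases
    o2 = o1 + N_HID            # start of output weights
    o3 = o2 + N_HID * N_OUT    # start of output biases
    hid = [math.tanh(sum(w[h * N_IN + j] * x[j] for j in range(N_IN)) + w[o1 + h])
           for h in range(N_HID)]
    return [math.tanh(sum(w[o2 + o * N_HID + h] * hid[h] for h in range(N_HID)) + w[o3 + o])
            for o in range(N_OUT)]

def evaluate(w, noise=0.3, steps=40):
    """Evaluation function: reward nearing the goal, penalize obstacle contact."""
    x, y = 0.0, 0.0
    goal, obs = (5.0, 5.0), (2.5, 2.5)
    penalty = 0.0
    for _ in range(steps):
        sense = [goal[0] - x, goal[1] - y, obs[0] - x, obs[1] - y]
        sense = [s + random.gauss(0.0, noise) for s in sense]  # sensor noise
        vx, vy = forward(w, sense)
        x += 0.3 * vx
        y += 0.3 * vy
        if math.hypot(x - obs[0], y - obs[1]) < 1.0:
            penalty += 1.0  # stepped inside the obstacle's radius
    return -math.hypot(x - goal[0], y - goal[1]) - penalty

def fitness(w):
    # Average several noisy episodes for a more stable estimate.
    return sum(evaluate(w) for _ in range(3)) / 3.0

# Truncation-selection neuroevolution: keep the 5 best genomes each
# generation and refill the population with Gaussian-mutated copies.
pop = [[random.gauss(0.0, 0.5) for _ in range(N_W)] for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]
    pop = elite + [[g + random.gauss(0.0, 0.2) for g in random.choice(elite)]
                   for _ in range(15)]
best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 2))
```

The evaluation function is the design lever the abstract emphasizes: how goal progress and collisions are weighted, and how many noisy episodes are averaged, largely determines which behaviors evolution rewards.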