The Kalman filter: an introduction to concepts. In: Autonomous Robot Vehicles.
K-d trees for semidynamic point sets. In: SCG '90, Proceedings of the Sixth Annual Symposium on Computational Geometry.
Reinforcement learning with replacing eligibility traces. Machine Learning, special issue on reinforcement learning.
Neural Networks: A Systematic Introduction.
Introduction to Reinforcement Learning.
FastSLAM: a factored solution to the simultaneous localization and mapping problem. In: Eighteenth National Conference on Artificial Intelligence.
Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In: IWAR '99, Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality.
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning).
Nearest-Neighbor Methods in Learning and Vision: Theory and Practice (Neural Information Processing).
Landmark selection for vision-based navigation. IEEE Transactions on Robotics.
Landmark selection for task-oriented navigation. IEEE Transactions on Robotics.
Learning efficient policies for vision-based navigation. In: IROS '09, Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Efficient vision-based navigation. Autonomous Robots.
A Bayesian approach for place recognition. Robotics and Autonomous Systems.
Effective landmark placement for accurate and reliable mobile robot navigation. Robotics and Autonomous Systems.
In general, a mobile robot operating in an unknown environment has to maintain a map and to determine its own location within that map. This imposes significant computational and memory demands on most autonomous systems, especially lightweight robots such as humanoids or flying vehicles. In this paper, we present a novel approach for learning a landmark selection policy that allows a robot to discard landmarks that are not valuable for its current navigation task. By maintaining only the important landmarks, the robot reduces its computational burden and carries out its task more efficiently. Our approach applies an unscented Kalman filter to the simultaneous localization and mapping (SLAM) problem and uses Monte-Carlo reinforcement learning to obtain the selection policy. In real-world and simulation experiments, we show that the learned policies allow for efficient robot navigation and outperform handcrafted strategies. We furthermore demonstrate that the learned policies are not tied to a specific scenario but generalize to environments with varying properties.
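To make the Monte-Carlo reinforcement-learning component concrete, the following is a minimal sketch of tabular every-visit Monte-Carlo control with an epsilon-greedy policy, applied to a heavily simplified stand-in for the landmark-selection problem. It is not the paper's actual method (which couples the policy to an unscented Kalman filter SLAM back end): the state abstraction (a bucketed map size), the keep/discard actions, and the reward trading localization benefit against map-maintenance cost are all hypothetical, chosen only to illustrate the learning loop.

```python
import random
from collections import defaultdict

# Hypothetical toy setup: the state is the number of landmarks currently
# kept (bucketed), and the action decides whether the newest observed
# landmark is kept (1) or discarded (0). All rewards are illustrative.
ACTIONS = (0, 1)  # 0 = discard landmark, 1 = keep landmark

def run_episode(policy, rng, horizon=20):
    """Roll out one episode; returns a list of (state, action, reward)."""
    n_kept = 0
    trajectory = []
    for _ in range(horizon):
        state = min(n_kept, 10)  # bucketed map size
        action = policy(state, rng)
        if action == 1:
            n_kept += 1
        # Toy reward: kept landmarks improve localization with diminishing
        # returns, while a larger map costs computation to maintain.
        reward = min(n_kept, 5) * 1.0 - 0.3 * n_kept
        trajectory.append((state, action, reward))
    return trajectory

def mc_control(episodes=2000, epsilon=0.1, gamma=0.95, seed=0):
    """Every-visit Monte-Carlo control with an epsilon-greedy policy."""
    rng = random.Random(seed)
    Q = defaultdict(float)      # action-value estimates
    counts = defaultdict(int)   # visit counts for incremental averaging

    def policy(state, rng):
        if rng.random() < epsilon:
            return rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        traj = run_episode(policy, rng)
        ret = 0.0
        for state, action, reward in reversed(traj):
            ret = reward + gamma * ret  # discounted return from this step
            counts[(state, action)] += 1
            Q[(state, action)] += (ret - Q[(state, action)]) / counts[(state, action)]
    return Q

Q = mc_control()
# Under this toy reward, keeping landmarks pays off while the map is small,
# so the learned values should prefer "keep" in the empty-map state.
assert Q[(0, 1)] > Q[(0, 0)]
```

The episodic structure is what makes Monte-Carlo control a natural fit here: the value of keeping a landmark only becomes apparent from the full return of a navigation run, which complete-episode returns capture without needing a model of the environment.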