Autonomous navigation in unstructured environments is a complex task and an active area of research in mobile robotics. Unlike urban areas with lanes, road signs, and maps, the environment around our robot is unknown and unstructured. Such an environment requires careful examination, as it is random and continuous, and the number of possible perceptions and actions is infinite. We describe a terrain classification approach for our autonomous robot based on Markov Random Fields (MRFs) applied to fused 3D laser and camera image data. Our primary data structure is a 2D grid whose cells carry information extracted from the sensor readings. All cells within the grid are classified and their surface is analyzed with regard to negotiability for wheeled robots. Knowledge of the robot's egomotion allows us to fuse previous classification results with current sensor data in order to fill data gaps and regions outside the sensors' field of view. We estimate egomotion by integrating IMU readings, GPS measurements, and wheel odometry in an extended Kalman filter. In our experiments we achieve a recall of about 90% for detecting streets and obstacles. We show that our approach is fast enough to run in real time on autonomous mobile robots.
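To make the described pipeline concrete, the following is a minimal C++ sketch of the central data structure: a 2D terrain grid whose cells carry features fused from 3D laser returns and projected camera pixels, with a class label refined by a simplified MRF-style neighborhood update (an ICM-like sweep). All type names, thresholds, and the update rule are illustrative assumptions, not the authors' implementation; in the actual system the per-cell evidence would come from learned class models over the fused features and the smoothing would be a proper MRF optimization.

```cpp
// Hypothetical sketch of a terrain grid with MRF-style label smoothing.
#include <array>
#include <cstddef>
#include <vector>

enum class TerrainClass { Unknown, Street, Rough, Obstacle };

struct Cell {
    float meanHeight = 0.f;        // accumulated from 3D laser returns
    float roughness  = 0.f;        // height variance within the cell
    std::array<float, 3> color{};  // mean RGB projected from the camera image
    TerrainClass label = TerrainClass::Unknown;
};

class TerrainGrid {
public:
    TerrainGrid(std::size_t w, std::size_t h) : w_(w), h_(h), cells_(w * h) {}

    Cell&       at(std::size_t x, std::size_t y)       { return cells_[y * w_ + x]; }
    const Cell& at(std::size_t x, std::size_t y) const { return cells_[y * w_ + x]; }

    // Per-cell evidence: crude thresholds standing in for learned class
    // likelihoods over the fused laser/camera features (assumed values).
    static TerrainClass localLabel(const Cell& c) {
        if (c.roughness > 0.25f) return TerrainClass::Obstacle;
        if (c.roughness > 0.05f) return TerrainClass::Rough;
        return TerrainClass::Street;
    }

    // One smoothing sweep: a cell keeps its data-driven label unless a clear
    // majority of its 4-neighbours carries another known label (ICM-like).
    void smoothLabels() {
        for (std::size_t y = 1; y + 1 < h_; ++y) {
            for (std::size_t x = 1; x + 1 < w_; ++x) {
                Cell& c = at(x, y);
                const TerrainClass local = localLabel(c);
                const std::array<TerrainClass, 4> nb = {
                    at(x - 1, y).label, at(x + 1, y).label,
                    at(x, y - 1).label, at(x, y + 1).label};
                int counts[4] = {0, 0, 0, 0};
                for (TerrainClass n : nb) ++counts[static_cast<int>(n)];
                int best = 0;
                for (int k = 1; k < 4; ++k)
                    if (counts[k] > counts[best]) best = k;
                const auto majority = static_cast<TerrainClass>(best);
                c.label = (counts[best] >= 3 && majority != TerrainClass::Unknown)
                              ? majority
                              : local;
            }
        }
    }

private:
    std::size_t w_, h_;
    std::vector<Cell> cells_;
};
```

A grid like this lends itself to the temporal fusion mentioned above: given the EKF egomotion estimate, previously classified cells can be shifted into the current robot frame and merged with fresh sensor data, so that gaps and occluded regions retain their last known labels.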