This paper describes a LIDAR-based perception system for ground robot mobility, comprising 3D object detection, classification, and tracking. The system was demonstrated on board our autonomous ground vehicle MuCAR-3, enabling it to navigate safely both in urban traffic-like scenarios and in off-road convoy scenarios. The efficiency of our approach stems from the unique combination of 2D and 3D data processing techniques: fast segmentation of point clouds into objects is performed in a 2½D occupancy grid, while object classification operates on the raw 3D point clouds. For fast object feature extraction, we advocate the use of statistics of local point cloud properties, captured by histograms over point features. In contrast to most existing work on 3D point cloud classification, where real-time operation is often out of reach, this combination allows our system to run in real time at a 0.1 s frame rate.
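To illustrate the histogram-over-point-features idea in the abstract (this is a minimal sketch, not the authors' implementation — the choice of per-point property, here simply point height above the cloud's minimum, and the bin count are assumptions):

```python
def histogram_descriptor(points, n_bins=10):
    """Fixed-length descriptor for a 3D point cloud: a normalized
    histogram of a per-point property. Here the property is the
    z coordinate relative to the cloud's lowest point; the real
    system uses richer local point cloud statistics.
    points: iterable of (x, y, z) tuples."""
    zs = [p[2] for p in points]
    z_min = min(zs)
    span = (max(zs) - z_min) or 1.0  # avoid division by zero for flat clouds
    hist = [0] * n_bins
    for z in zs:
        # Map height into a bin index, clamping the maximum into the last bin.
        idx = min(int((z - z_min) / span * n_bins), n_bins - 1)
        hist[idx] += 1
    total = len(zs)
    return [c / total for c in hist]  # normalize: clouds of any size compare

# Usage: a flat, car-like cloud and a tall, pole-like cloud yield
# descriptors of identical length that a classifier can compare directly.
car_like = [(0.1 * i, 0.0, 0.1 * (i % 5)) for i in range(50)]
pole_like = [(0.0, 0.0, 0.1 * i) for i in range(50)]
d_car = histogram_descriptor(car_like)
d_pole = histogram_descriptor(pole_like)
```

Because the histogram length is independent of the number of points in the segment, such descriptors can be computed and classified at fixed cost per object, which is what makes this style of feature attractive for real-time operation.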