A featureless and stochastic approach to on-board stereo vision system pose
Image and Vision Computing
This paper presents an efficient technique for estimating the pose of an on-board stereo vision system relative to the dominant surface in the environment, which is assumed to be the road surface. Unlike previous approaches, it can be used in both urban and highway scenarios, since it relies on raw 3D data points rather than the extraction of specific visual traffic features. The whole process is performed in Euclidean space and consists of two stages. First, a compact 2D representation of the original 3D data points is computed. Then, a RANdom SAmple Consensus (RANSAC) based least-squares approach is used to fit a plane to the road. Fast RANSAC fitting is obtained by selecting points according to a probability function that accounts for the density of points at a given depth. Finally, the stereo camera's height and pitch angle are computed relative to the fitted road plane. The proposed technique is intended for driver-assistance applications such as vehicle or pedestrian detection. Experimental results are presented for urban environments, which are the most challenging scenarios (flat, uphill, and downhill driving; speed bumps; and vehicle accelerations), and are validated against manually annotated ground truth. Comparisons with previous work show improvements in both CPU processing time and the accuracy of the obtained results.
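The two-stage pipeline described in the abstract (RANSAC plane hypothesis generation followed by a least-squares refit, then recovering camera height and pitch from the fitted plane) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the depth-density-weighted point sampling is replaced here by uniform random sampling, the 2D compaction stage is omitted, and the pitch convention (z axis forward, y axis up) is an assumption for the example.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.05, seed=None):
    """Fit a plane n.p + d = 0 to 3D points (N x 3 array) with RANSAC.

    Sketch only: uniform random sampling stands in for the paper's
    depth-density-weighted sampling.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement: smallest singular vector of the
    # centered inlier cloud is the plane normal.
    inl = points[best_inliers]
    centroid = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - centroid)
    normal = vt[-1]
    if normal[1] < 0:            # orient the normal upward (y-up assumed)
        normal = -normal
    d = -normal @ centroid
    return normal, d, best_inliers

def camera_height_and_pitch(normal, d):
    """Height = distance from the camera origin to the plane;
    pitch = angle between the optical (z) axis and the plane."""
    height = abs(d)              # normal is unit length
    pitch = np.arcsin(normal[2])
    return height, pitch
```

A quick synthetic check: points scattered on a road plane 1.2 m below the camera, plus random outliers, should recover a height near 1.2 and a pitch near zero.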