Video-Based Driver Assistance—From Basic Functions to Applications. International Journal of Computer Vision.
A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion. Proceedings of the Fourth IEEE International Conference on Computer Vision Systems (ICVS '06).
Turn-Intent Analysis Using Body Pose for Intelligent Driver Assistance. IEEE Pervasive Computing.
Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation. IEEE Transactions on Intelligent Transportation Systems.
Lane Change Intent Analysis Using Robust Operators and Sparse Bayesian Learning. IEEE Transactions on Intelligent Transportation Systems.
GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection. IEEE Transactions on Image Processing.
With a panoramic view of the scene, a single omnidirectional camera can monitor the 360-degree surround of a vehicle, or observe the vehicle's interior and exterior at the same time. We investigate the problems that arise when driver assistance functionalities designed for rectilinear cameras are instead integrated with a single omnidirectional camera. Omnidirectional cameras have already proven effective for determining head gaze orientation from within a vehicle; here we examine the issues involved in integrating lane tracking functions using the same omnidirectional camera, which provides a view of both the driver and the road ahead. We analyze the impact of the omnidirectional camera's reduced image resolution on lane tracking accuracy, the price paid for the expansive view. To do so, we present Omni-VioLET, a modified implementation of the vision-based lane estimation and tracking system (VioLET), and conduct a systematic performance evaluation of lane trackers operating on monocular rectilinear images and on omnidirectional images. We compare the lane tracking of Omni-VioLET and Recti-VioLET against ground truth using images captured along the same freeway road over a specified course. The results are surprising: with roughly 1/10th the number of pixels representing the same space and about 1/3rd the horizontal image resolution of a rectilinear image of the same road, the omnidirectional implementation yields only three times the mean absolute error in tracking the left lane boundary position.
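A common preprocessing step when reusing rectilinear-camera algorithms with an omnidirectional sensor is to unwarp the relevant annulus of the omnidirectional image into a panoramic (cylindrical) view before running, for example, a lane tracker. The sketch below is a minimal, hypothetical illustration of that idea using nearest-neighbour sampling with NumPy; the function name, the simple polar camera model, and all parameters are assumptions for illustration, not the actual Omni-VioLET implementation.

```python
import numpy as np

def unwarp_omni(omni, center, r_min, r_max, out_w, out_h):
    """Unwarp the annulus [r_min, r_max] of an omnidirectional image
    (simple polar model, hypothetical) into an out_h x out_w panorama
    via nearest-neighbour sampling.

    Rows of the output correspond to radii (distance from the image
    center), columns to viewing angles over the full 360 degrees.
    """
    cx, cy = center
    # One output column per viewing angle, one row per sampled radius.
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_min, r_max, out_h)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    # Polar-to-Cartesian lookup into the source image, clipped to bounds.
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, omni.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, omni.shape[0] - 1)
    return omni[ys, xs]

# Usage with a synthetic 200x200 omnidirectional frame:
omni = np.zeros((200, 200), dtype=np.uint8)
omni[100, 120] = 7  # a marker at angle 0, radius 20 from center (100, 100)
pano = unwarp_omni(omni, center=(100, 100), r_min=20, r_max=80,
                   out_w=360, out_h=60)
```

The output height and width chosen here also make the abstract's resolution trade-off concrete: the unwarped panorama devotes far fewer pixels to the road region than a rectilinear camera would, which is the accuracy cost the evaluation quantifies.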