This paper presents a sensor-fusion framework for video-based navigation, which offers several advantages over existing approaches. With this type of navigation, road signs are superimposed directly onto the video of the road scene rather than onto a 2-D map, as in conventional navigation systems. Drivers can then follow the virtual signs in the video to reach their destination. The challenges of video-based navigation require the use of multiple sensors. The proposed sensor-fusion framework has two major components: 1) a computer vision module that accurately detects and tracks the road using partition sampling and auxiliary variables and 2) a sensor-fusion module that uses multiple particle filters to integrate vision, the Global Positioning System (GPS), and geographic information systems (GIS). GPS and GIS provide prior knowledge about the road for the vision module, and the vision module, in turn, corrects GPS errors.
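To make the particle-filtering idea behind the fusion module concrete, the sketch below shows one predict-weight-resample cycle of a generic 2-D particle filter that fuses a noisy GPS fix into a position estimate. This is an illustrative simplification, not the paper's actual algorithm: the function name, the Gaussian motion and measurement models, and the noise parameters (`motion_noise`, `gps_sigma`) are all assumptions for demonstration; the paper's framework additionally uses partition sampling, auxiliary variables, and GIS road priors.

```python
import math
import random

def particle_filter_step(particles, gps_meas, motion_noise=1.0, gps_sigma=5.0):
    """One predict-weight-resample cycle of a basic particle filter.

    particles: list of (x, y) position hypotheses.
    gps_meas:  (x, y) GPS fix treated as a noisy measurement.
    Returns (resampled particles, position estimate).
    """
    # Predict: diffuse each particle with Gaussian motion noise
    # (a stand-in for a real vehicle motion model).
    predicted = [(x + random.gauss(0.0, motion_noise),
                  y + random.gauss(0.0, motion_noise))
                 for x, y in particles]

    # Weight: Gaussian likelihood of the GPS measurement given each particle.
    weights = []
    for x, y in predicted:
        d2 = (x - gps_meas[0]) ** 2 + (y - gps_meas[1]) ** 2
        weights.append(math.exp(-d2 / (2.0 * gps_sigma ** 2)))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # Resample particles in proportion to their weights.
    resampled = random.choices(predicted, weights=weights, k=len(particles))

    # Estimate: mean of the resampled particle set.
    n = len(resampled)
    estimate = (sum(p[0] for p in resampled) / n,
                sum(p[1] for p in resampled) / n)
    return resampled, estimate

if __name__ == "__main__":
    random.seed(0)
    # Broad initial belief; repeated GPS fixes near (10, 10) pull it in.
    pts = [(random.uniform(-20, 20), random.uniform(-20, 20)) for _ in range(500)]
    for _ in range(10):
        pts, est = particle_filter_step(pts, (10.0, 10.0))
    print(est)
```

In the paper's setting, the measurement step would combine several such filters, with vision and GIS road geometry reweighting particles alongside GPS, so that the vision likelihood can pull the estimate away from a multipath-corrupted GPS fix.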