VITS: A Vision System for Autonomous Land Vehicle Navigation
IEEE Transactions on Pattern Analysis and Machine Intelligence - Special Issue on Industrial Machine Vision and Computer Vision Technology
Representation space: an approach to the integration of visual information
Proceedings of the Image Understanding Workshop
Qualitative target motion detection and tracking
Proceedings of the Image Understanding Workshop
Modeling rugged terrain by mobile robots with multiple sensors
Finding road lane boundaries for vision-guided vehicle navigation
Vision-based vehicle guidance
Proceedings of the 1986 ACM Fall Joint Computer Conference
Parallel Processing in the DARPA Strategic Computing Vision Program
IEEE Expert: Intelligent Systems and Their Applications
International Journal of Computer Vision
Recent progress in road and lane detection: a survey
Machine Vision and Applications
The Navlab project, which seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads, is discussed. The perception techniques developed for the Navlab include road-following techniques using color classification and neural nets. These are discussed with reference to three road-following systems: SCARF, YARF, and ALVINN. Three-dimensional perception using three types of terrain representation (obstacle maps, terrain feature maps, and high-resolution maps) is examined. It is noted that perception continues to be an obstacle in developing autonomous vehicles. This work is part of the Defense Advanced Research Projects Agency's Strategic Computing Initiative.
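The abstract mentions road following by color classification, the approach used in SCARF-style systems. As an illustrative sketch only (not the paper's actual implementation), one minimal form is to fit a Gaussian color model to road and off-road training pixels and label each new pixel by the higher likelihood; all names and parameters below are assumptions for the example:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a mean and covariance to RGB samples (N x 3)."""
    mean = samples.mean(axis=0)
    # Small regularizer keeps the covariance invertible
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)
    return mean, cov

def log_likelihood(pixels, mean, cov):
    """Log multivariate-Gaussian density for each pixel row (N x 3)."""
    diff = pixels - mean
    mahal = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(cov), diff)
    return -0.5 * (mahal + np.log(np.linalg.det(cov)) + 3 * np.log(2 * np.pi))

def classify(pixels, road_model, offroad_model):
    """Label each pixel 1 (road) or 0 (off-road) by higher likelihood."""
    lr = log_likelihood(pixels, *road_model)
    lo = log_likelihood(pixels, *offroad_model)
    return (lr > lo).astype(int)

# Toy data: grayish road pixels vs. green vegetation pixels
rng = np.random.default_rng(0)
road = rng.normal([110, 110, 110], 8, size=(200, 3))
grass = rng.normal([60, 140, 50], 8, size=(200, 3))
labels = classify(np.vstack([road, grass]),
                  fit_gaussian(road), fit_gaussian(grass))
```

In practice such systems re-estimate the color models as the vehicle moves, since road appearance shifts with lighting and surface changes, which is part of why the abstract stresses bad weather and bad lighting.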