New Visual Invariants for Terrain Navigation Without 3D Reconstruction

  • Authors:
  • Gin-Shu Young; Martin Herman; Tsai-Hong Hong; David Jiang; Jackson C. S. Yang

  • Affiliations:
  • National Institute of Standards and Technology (NIST), Bldg. 220, Rm. B124, Gaithersburg, MD 20899, and Robotics Laboratory, Department of Mechanical Engineering, University of Maryland, College Park, MD 20742; National Institute of Standards and Technology (NIST), Bldg. 220, Rm. B124, Gaithersburg, MD 20899; National Institute of Standards and Technology (NIST), Bldg. 220, Rm. B124, Gaithersburg, MD 20899; National Institute of Standards and Technology (NIST), Bldg. 220, Rm. B124, Gaithersburg, MD 20899, and Robotics Laboratory, Department of Mechanical Engineering, University of Maryland, College Park, MD 20742; Robotics Laboratory, Department of Mechanical Engineering, University of Maryland, College Park, MD 20742

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 1998

Abstract

For autonomous vehicles to achieve terrain navigation, obstacles must be discriminated from terrain before any path planning and obstacle avoidance activity is undertaken. In this paper, a novel approach to obstacle detection has been developed. The method finds obstacles in the 2D image space, as opposed to 3D reconstructed space, using optical flow. Our method assumes that both nonobstacle terrain regions, as well as regions with obstacles, will be visible in the imagery. Therefore, our goal is to discriminate between terrain regions with obstacles and terrain regions without obstacles. Our method uses new visual linear invariants based on optical flow. Employing the linear invariance property, obstacles can be directly detected by using reference flow lines obtained from measured optical flow. The main features of this approach are: (1) 2D visual information (i.e., optical flow) is directly used to detect obstacles; no range, 3D motion, or 3D scene geometry is recovered; (2) knowledge about the camera-to-ground coordinate transformation is not required; (3) knowledge about vehicle (or camera) motion is not required; (4) the method is valid for the vehicle (or camera) undergoing general six-degree-of-freedom motion; (5) the error sources involved are reduced to a minimum, because the only information required is one component of optical flow. Numerous experiments using both synthetic and real image data are presented. Our methods are demonstrated in both ground and air vehicle scenarios.
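
The abstract describes detecting obstacles by comparing one measured optical-flow component against reference flow lines, with no 3D reconstruction. Below is a minimal sketch of that idea, assuming that along a chosen image line the ground-plane flow component varies linearly with image coordinate; the function name, the `ref_mask` input, and the threshold value are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def detect_obstacles_along_line(coords, flow_component, ref_mask, threshold=0.5):
    """Flag points whose flow deviates from a fitted reference flow line.

    coords         : 1D array of image coordinates sampled along one image line
    flow_component : one component of measured optical flow at those points
    ref_mask       : boolean array marking points assumed to lie on
                     obstacle-free terrain (used to fit the reference line)
    threshold      : deviation (in flow units) beyond which a point is
                     flagged as an obstacle (illustrative value)
    """
    # Fit the reference flow line v = a*coord + b from terrain points only.
    a, b = np.polyfit(coords[ref_mask], flow_component[ref_mask], deg=1)

    # Predicted ground-plane flow along the whole line.
    predicted = a * coords + b

    # Points whose measured flow departs from the linear prediction are
    # treated as candidate obstacles; no range or 3D motion is recovered.
    residual = np.abs(flow_component - predicted)
    return residual > threshold


if __name__ == "__main__":
    # Synthetic example: linear ground-plane flow plus a bump simulating an obstacle.
    coords = np.linspace(0, 100, 101)
    flow = 0.02 * coords + 1.0
    flow[40:55] += 2.0                      # obstacle region deviates from the line
    ref = np.ones_like(coords, dtype=bool)
    ref[40:55] = False                      # exclude the suspect region from the fit
    print(np.nonzero(detect_obstacles_along_line(coords, flow, ref))[0])
```

In this sketch the only measurement used is a single flow component along a line, which mirrors the paper's claim that error sources are reduced because no camera-to-ground transformation or vehicle motion estimate is needed.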