Fusion of 3D-LIDAR and camera data for scene parsing

  • Authors:
  • Gangqiang Zhao; Xuhong Xiao; Junsong Yuan; Gee Wah Ng

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2014

Abstract

Fusion of information gathered from multiple sources is essential for building a comprehensive situation picture for autonomous ground vehicles. This paper describes an approach that performs scene parsing and data fusion for a 3D-LIDAR scanner (Velodyne HDL-64E) and a video camera. First, a geometry segmentation algorithm is proposed to detect obstacles and ground areas in the data collected by the Velodyne scanner. Then, the corresponding image collected by the video camera is classified patch by patch into more detailed categories. Next, the parsing result for each frame is obtained by fusing the result from the Velodyne data with that from the image in a fuzzy logic inference framework. Finally, the parsing results of consecutive frames are smoothed by a Markov random field based temporal fusion method. The proposed approach has been evaluated on datasets collected by our autonomous ground vehicle testbed in both rural and urban areas. The fused results are more reliable than those obtained by analyzing only the images or only the Velodyne data.
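The fusion step described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual rule base: it assumes each source (the Velodyne segmentation and the image patch classifier) emits a fuzzy membership score per class, and combines them with a simple min/max (fuzzy AND over sources, max over classes) rule; the class names and the specific rule are assumptions for illustration.

```python
def fuse_memberships(lidar_scores, image_scores):
    """Fuse two per-class fuzzy membership dicts into one label.

    lidar_scores, image_scores: dict mapping class name -> membership in [0, 1].
    Assumed rule: the firing strength of each class is the minimum of the
    two source memberships (fuzzy AND); the fused label is the class with
    the maximum firing strength (max over rules).
    """
    classes = set(lidar_scores) | set(image_scores)
    strength = {
        c: min(lidar_scores.get(c, 0.0), image_scores.get(c, 0.0))
        for c in classes
    }
    label = max(strength, key=strength.get)
    return label, strength[label]


# Example: the LIDAR segmentation is confident the region is an obstacle,
# while the image classifier agrees less strongly.
lidar = {"obstacle": 0.9, "ground": 0.1}
image = {"obstacle": 0.6, "ground": 0.3}
print(fuse_memberships(lidar, image))  # ('obstacle', 0.6)
```

A real fuzzy inference framework would also define membership functions over continuous inputs (e.g. height above ground, classifier confidence) and defuzzify the rule outputs; the min/max combination here is only the skeleton of that process.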