Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

  • Authors:
  • Martin Persson; Tom Duckett; Achim J. Lilienthal

  • Affiliations:
  • Center for Applied Autonomous Sensor Systems, Department of Technology, Örebro University, Örebro, Sweden; Department of Computing and Informatics, University of Lincoln, Lincoln, UK; Center for Applied Autonomous Sensor Systems, Department of Technology, Örebro University, Örebro, Sweden

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2008

Abstract

This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors, predicting where the mobile robot can find buildings and potentially drivable ground.
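The core idea of extending ground-level labels into an aerial image can be illustrated with a seeded region-growing sketch. The following is a minimal, hypothetical illustration only, not the authors' actual method: it assumes the aerial image is a 2D grid of intensities, that cells the robot has already classified as drivable ground serve as seeds, and that similar-intensity neighbouring pixels are absorbed into the drivable-ground segment.

```python
from collections import deque

def segment_aerial(aerial, seeds, tol=0.1):
    """Grow regions in an aerial intensity grid from ground-labelled seeds.

    aerial: 2D list of floats (pixel intensities in [0, 1]).
    seeds:  (row, col) cells the robot classified as drivable ground.
    tol:    max intensity difference for a neighbour to join the region.
    Returns a 2D boolean mask predicting drivable ground beyond sensor range.
    """
    rows, cols = len(aerial), len(aerial[0])
    mask = [[False] * cols for _ in range(rows)]
    queue = deque()
    for r, c in seeds:
        mask[r][c] = True
        queue.append((r, c))
    # Breadth-first expansion: absorb 4-connected pixels whose intensity
    # is close to that of the pixel they are reached from.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(aerial[nr][nc] - aerial[r][c]) <= tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# Toy example: a bright "road" (0.9) next to a dark "building" (0.2).
aerial = [[0.9, 0.9, 0.2],
          [0.9, 0.9, 0.2],
          [0.9, 0.9, 0.2]]
mask = segment_aerial(aerial, seeds=[(0, 0)])
# The road spreads to all bright cells; the dark column stays excluded.
```

In the paper's terms, the seeds play the role of the local ground-level semantic map, while the grown mask corresponds to the segmentation that predicts drivable ground beyond the range of the robot's sensors.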