Semantic fusion of laser and vision in pedestrian detection

  • Authors:
  • Luciano Oliveira, Urbano Nunes, Paulo Peixoto, Marco Silva, Fernando Moita

  • Affiliations:
  • Institute of Systems and Robotics, University of Coimbra, Pinhal de Marrocos, Polo II, Coimbra, Portugal (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010

Abstract

Fusion of laser and vision for object detection has generally followed two main approaches: (1) independent integration of sensor-driven features or sensor-driven classifiers, or (2) laser segmentation finds a region of interest (ROI) and an image classifier labels the projected ROI. Here, we propose a novel fusion approach based on semantic information and embodied at multiple levels. Sensor fusion relies on the spatial relationships among parts-based classifiers and is performed via a Markov logic network. The proposed system handles partial segments, can recover depth information even if the laser fails, and models the integration through contextual information; these characteristics are not found in previous approaches. Experiments on pedestrian detection demonstrate the effectiveness of our method on data sets gathered in urban scenarios.
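
For illustration only, the following Python sketch mimics the pipeline the abstract describes: a laser segment proposes a pedestrian hypothesis, parts-based classifiers score body parts in the projected ROI, and a few weighted logical rules (a toy stand-in for Markov logic network inference, not the authors' actual model) combine the evidence. All class names, rule weights, and thresholds are assumptions introduced here for the example.

```python
# Minimal, self-contained sketch of laser/vision evidence fusion.
# Not the paper's implementation; weights and bounds are illustrative.

import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class LaserSegment:
    range_m: float   # distance to the segment (metres)
    width_m: float   # lateral extent of the segment (metres)

@dataclass
class PartScores:
    head: float      # confidence of a head/shoulders part classifier
    torso: float     # confidence of a torso part classifier
    legs: float      # confidence of a legs part classifier

def pedestrian_sized(seg: LaserSegment) -> float:
    """Soft evidence that a laser segment has pedestrian-like width."""
    return 1.0 if 0.3 <= seg.width_m <= 0.9 else 0.2  # assumed width bounds

def fuse(seg: Optional[LaserSegment], parts: PartScores) -> float:
    """Combine laser and vision evidence with weighted rules (log-odds style)."""
    # Weighted ground rules, loosely in the spirit of MLN inference:
    #   w1: pedestrian-sized laser segment            => pedestrian
    #   w2: head AND torso detected (upper-body pair) => pedestrian
    #   w3: legs detected                             => pedestrian
    rules = [
        (1.5, pedestrian_sized(seg) if seg is not None else 0.5),  # laser may fail
        (2.0, min(parts.head, parts.torso)),  # spatially consistent upper body
        (1.0, parts.legs),
    ]
    score = sum(w * v for w, v in rules) - 2.0   # constant prior/bias term
    return 1.0 / (1.0 + math.exp(-score))        # squash to a probability

parts = PartScores(head=0.8, torso=0.7, legs=0.6)
print(fuse(LaserSegment(range_m=12.0, width_m=0.5), parts))  # with laser support
print(fuse(None, parts))                                     # laser dropout
```

Note how the laser term falls back to a neutral value when the segment is missing, loosely reflecting the abstract's claim that the system still operates when the laser fails.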