Simultaneous egomotion estimation, segmentation, and moving object detection

  • Authors:
  • Shao-Wen Yang and Chieh-Chih Wang

  • Affiliations:
  • Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10607, Taiwan; Department of Computer Science and Information Engineering, Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei 10607, Taiwan

  • Venue:
  • Journal of Field Robotics
  • Year:
  • 2011

Abstract

Robust egomotion estimation is a key prerequisite for making a robot truly autonomous. In previous work, a multimodel extension of random sample consensus (RANSAC) was introduced to deal with environments with rapid changes by incorporating moving object information, and a multiscale matching algorithm was proposed to resolve the issue of imperfect segmentation. In this paper, we present a novel specialization of RANSAC that extends that work: a unified framework that simultaneously achieves egomotion estimation, multiscale segmentation, and moving object detection within the RANSAC paradigm. The motivation is to provide a robust, real-time solution to egomotion estimation, segmentation, and moving object detection in highly dynamic environments. The idea is to augment the discriminative power of the spatial and temporal appearances of objects with spatiotemporal consistency. The objective is twofold: first, to split mismerged segments and distinguish nonstationary objects from stationary ones using spatial consistency; second, to merge oversegmented segments and differentiate moving objects from outlying objects using temporal consistency. Moving objects of considerably different sizes, from pedestrians to trucks, can be properly segmented and correctly detected. We also show that the performance of egomotion estimation can be further improved by taking into account both stationary and moving object information. Our approach is extensively evaluated on challenging data sets and compared to the state of the art. The experiments also show that our approach serves as a general framework that works well with various kinds of planar range data. © 2011 Wiley Periodicals, Inc.
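
To make the RANSAC paradigm behind the abstract concrete, below is a minimal sketch of consensus-based rigid egomotion estimation on 2D point correspondences. This is not the authors' multimodel algorithm: it assumes correspondences between two planar range scans are already given (the paper's multiscale matching handles that), omits the spatiotemporal consistency checks, and the function names `estimate_rigid_2d` and `ransac_egomotion` are hypothetical. It illustrates only the core idea that the largest consensus set yields the egomotion, while coherent clusters of outliers are candidate moving objects.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch in 2D)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ransac_egomotion(src, dst, iters=200, thresh=0.1, rng=None):
    """Return the rigid transform with the largest inlier set.

    Inliers correspond to the stationary background; points that stay
    outliers across frames are candidates for moving objects.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)  # minimal sample for 2D rigid motion
        R, t = estimate_rigid_2d(src[idx], dst[idx])
        residuals = np.linalg.norm(dst - (src @ R.T + t), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = estimate_rigid_2d(src[best_inliers], dst[best_inliers])  # refit on consensus set
    return R, t, best_inliers

if __name__ == "__main__":
    # Synthetic demo: 80 static points follow the true robot motion, while 10
    # points on a "moving object" translate independently and should end up
    # outside the consensus set.
    rng = np.random.default_rng(1)
    static = rng.uniform(-10, 10, size=(80, 2))
    theta, trans = 0.05, np.array([0.3, -0.1])             # true robot motion
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    mover = rng.uniform(-10, 10, size=(10, 2))
    src = np.vstack([static, mover])
    dst = np.vstack([static @ R_true.T + trans,
                     mover + np.array([1.5, 0.0])])        # object moved on its own
    R, t, inliers = ransac_egomotion(src, dst)
    print("recovered t:", t, "| outlier count:", (~inliers).sum())
```

In this toy setup the recovered translation matches the true motion and the ten moving-object points are reported as outliers, which mirrors the paper's premise: the dominant rigid motion explains the static scene, and what it cannot explain is where segmentation and moving object detection begin.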