Scene understanding through autonomous interactive perception

  • Authors:
  • Niklas Bergström; Carl Henrik Ek; Mårten Björkman; Danica Kragic

  • Affiliations:
  • Computer Vision and Active Perception Laboratory, Royal Institute of Technology (KTH), Stockholm, Sweden (all authors)

  • Venue:
  • ICVS'11: Proceedings of the 8th International Conference on Computer Vision Systems
  • Year:
  • 2011


Abstract

We propose a framework for detecting, extracting, and modeling objects in natural scenes from multi-modal data. The framework is iterative, exploiting different hypotheses in a complementary manner. We employ it in realistic scenarios based on visual appearance and depth information. Using a robotic manipulator that interacts with the scene, object hypotheses generated from appearance information are confirmed through pushing. Each generated hypothesis feeds into the subsequent one, continuously refining the predictions about the scene. We show results that demonstrate the synergistic effect of applying multiple hypotheses for real-world scene understanding. The method is efficient and runs in real time.
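
The sketch below illustrates, in broad strokes, the kind of iterative hypothesis-refinement loop the abstract describes: segment the scene from appearance and depth, push the most uncertain hypothesis, and feed the observed outcome into the next iteration. All names here (Hypothesis, segment_by_appearance, push_and_observe, the confidence threshold) are illustrative assumptions, not the authors' implementation.

    """Minimal, hypothetical sketch of an iterative interactive-perception loop.
    Segmentation and pushing are stubbed with random outcomes so the script runs
    standalone; in a real system they would call the vision pipeline and the arm."""

    from dataclasses import dataclass
    import random


    @dataclass
    class Hypothesis:
        label: int
        confidence: float  # belief that this segment corresponds to a single rigid object


    def segment_by_appearance(num_segments=3):
        # Stand-in for appearance/depth-based segmentation: returns uncertain hypotheses.
        return [Hypothesis(label=i, confidence=random.uniform(0.4, 0.8))
                for i in range(num_segments)]


    def push_and_observe(h):
        # Stand-in for pushing the hypothesised object and observing the induced motion.
        # Coherent motion of the whole segment raises confidence; incoherent motion lowers it.
        coherent = random.random() > 0.3
        return min(1.0, h.confidence + 0.3) if coherent else max(0.0, h.confidence - 0.3)


    def interactive_scene_understanding(max_iterations=5, threshold=0.9):
        hypotheses = segment_by_appearance()
        for _ in range(max_iterations):
            uncertain = [h for h in hypotheses if h.confidence < threshold]
            if not uncertain:
                break  # every hypothesis confirmed
            target = min(uncertain, key=lambda h: h.confidence)
            target.confidence = push_and_observe(target)  # result feeds the next iteration
        return hypotheses


    if __name__ == "__main__":
        for h in interactive_scene_understanding():
            print(h)

The loop structure, not the stubbed components, is the point: each interaction with the scene is chosen to resolve the currently least certain hypothesis, so predictions about the scene are refined incrementally.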