Using multi-modal 3D contours and their relations for vision and robotics

  • Authors:
  • Emre Başeski, Nicolas Pugeault, Sinan Kalkan, Leon Bodenhagen, Justus H. Piater, Norbert Krüger

  • Affiliations:
  • The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Odense, Denmark (E. Başeski, L. Bodenhagen, N. Krüger)
  • Center for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom (N. Pugeault)
  • Department of Computer Engineering, Middle East Technical University, Ankara, Turkey (S. Kalkan)
  • Department of Electrical Engineering and Computer Science, University of Liège, Liège, Belgium (J. H. Piater)

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2010

Abstract

In this work, we make use of 3D contours and the relations between them (namely coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information. We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping and grasp learning. We argue that such 3D global reasoning processes complement widely used 2D local approaches such as bag-of-features, since 3D relations are invariant under camera transformations and 3D information can be directly linked to actions. We therefore stress the necessity of including both global and local features with different spatial dimensions within a representation. We also discuss the importance of making efficient use of the uncertainty associated with the features and relations, and of their applicability in a given context.
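To make the geometric relations named in the abstract concrete, the following is a minimal sketch of how angle, distance, and coplanarity could be computed between two 3D contours. It assumes each contour is approximated by a straight segment given as a point and a direction; the function name and this encoding are illustrative, not the paper's actual multi-modal descriptor, and cocolority is omitted because it requires the appearance modality.

```python
import numpy as np

def contour_relations(p1, d1, p2, d2, eps=1e-9):
    """Toy relations between two straight 3D contour segments, each
    approximated by a point p and a direction d. The names and the
    straight-segment approximation are illustrative only."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)

    # Angle: unsigned angle between the two directions, in [0, pi/2].
    angle = np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0))

    n = np.cross(d1, d2)  # normal of the plane spanned by the directions
    if np.linalg.norm(n) < eps:
        # Parallel lines: distance of p2 from the line through p1.
        distance = np.linalg.norm(np.cross(p2 - p1, d1))
        coplanar = True  # parallel lines always share a plane
    else:
        # Skew or intersecting lines: shortest distance between them.
        distance = abs((p2 - p1) @ n) / np.linalg.norm(n)
        # Coplanar iff the scalar triple product vanishes.
        coplanar = bool(abs((p2 - p1) @ n) < eps)

    # Cocolority (similar color on the facing sides of two contours)
    # needs the appearance part of the descriptor and is omitted here.
    return angle, distance, coplanar
```

Because such relations are defined on 3D entities, they are unchanged by a rigid camera transformation, which is the invariance property the abstract contrasts with 2D local features.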