Learning to place new objects in a scene

  • Authors:
  • Yun Jiang, Marcus Lim, Changxi Zheng, Ashutosh Saxena

  • Affiliations:
  • Computer Science Department, Cornell University, USA (all authors)

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2012

Abstract

Placing is a necessary skill for a personal robot to have in order to perform tasks such as arranging objects in a disorganized room. The object placements should not only be stable but also be in their semantically preferred placing areas and orientations. This is challenging because an environment can have a large variety of objects and placing areas that may not have been seen by the robot before. In this paper, we propose a learning approach for placing multiple objects in different placing areas in a scene. Given point-clouds of the objects and the scene, we design appropriate features and use a graphical model to encode various properties, such as the stacking of objects, stability, object-area relationships and common placing constraints. The inference in our model is an integer linear program, which we solve efficiently via a linear programming relaxation. We extensively evaluate our approach on 98 objects from 16 categories placed into 40 areas. Our robotic experiments show a success rate of 98% in placing known objects and 82% in placing new objects stably. We use our method on our robots for performing tasks such as loading several dish-racks, a bookshelf and a fridge with multiple items.
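
As a rough illustration of the inference step mentioned in the abstract, the sketch below formulates a stripped-down object-to-area assignment as an integer linear program and solves its linear programming relaxation with SciPy. The score matrix, the single "one area per object" constraint, and the rounding step are hypothetical stand-ins; the paper's actual graphical model also encodes stacking, stability and other placing constraints not shown here.

    import numpy as np
    from scipy.optimize import linprog

    def assign_objects_to_areas(scores):
        # scores[i, j]: hypothetical learned suitability of placing object i in area j.
        n_obj, n_area = scores.shape
        # Binary decision variables x[i, j] (flattened row-major), relaxed to 0 <= x <= 1.
        c = -scores.ravel()  # linprog minimizes, so negate to maximize total score
        # Constraint: each object is assigned to exactly one placing area.
        A_eq = np.zeros((n_obj, n_obj * n_area))
        for i in range(n_obj):
            A_eq[i, i * n_area:(i + 1) * n_area] = 1.0
        b_eq = np.ones(n_obj)
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0), method="highs")
        x = res.x.reshape(n_obj, n_area)
        # Round the relaxed solution to a hard assignment (argmax per object).
        return x.argmax(axis=1)

    # Toy example: 3 objects, 2 placing areas, made-up scores.
    scores = np.array([[0.9, 0.2],
                       [0.3, 0.8],
                       [0.6, 0.5]])
    print(assign_objects_to_areas(scores))  # -> [0 1 0]

For this toy assignment structure the relaxation happens to be tight (the constraint matrix is totally unimodular), so the LP already returns an integral solution; with the richer stacking and placing constraints described in the paper, a relaxed solution would in general need rounding back to an integral placement.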