Web-enhanced object category learning for domestic robots

  • Authors:
  • Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tomohito Takubo; Tatsuo Arai

  • Affiliations:
  • Graduate School of Engineering Science, Osaka University, Toyonaka, Japan 560-8531 (all authors)

  • Venue:
  • Intelligent Service Robotics
  • Year:
  • 2013

Abstract

We present a system architecture for domestic robots that allows them to learn object categories after a single sample object has initially been learned. We explore the situation in which a human teaches a robot a novel object, and the robot enhances this learning using a large amount of image data from the Internet. The main goal of this research is to provide the robot with capabilities to enhance its learning while minimizing the time and effort required for a human to train it. Our active learning approach consists of learning the object name through a speech interface and creating a visual object model using a depth-based attention model adapted to the robot's personal space. Given the object's name (keyword), a large number of object-related images are collected from two main image sources (Google Images and the LabelMe website). We address the problem of separating good training samples from noisy images in two steps: (1) similar-image selection using a Simile Selector Classifier, and (2) non-real-image filtering using a variant of Gaussian Discriminant Analysis. After web image selection, object category classifiers are trained and tested on different objects of the same category. Our experiments demonstrate the effectiveness of our robot learning approach.
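The abstract's second filtering step applies a variant of Gaussian Discriminant Analysis to separate real photographs from non-real web images (e.g., drawings or clip art). The paper does not give implementation details, so the following is only a minimal sketch of a standard two-class GDA classifier, assuming a feature vector has already been extracted for each image; the class name and feature pipeline are hypothetical, not the authors' code.

```python
# Hypothetical sketch: two-class Gaussian Discriminant Analysis (QDA form)
# for flagging non-real images given per-image feature vectors.
import numpy as np


class GaussianDiscriminantAnalysis:
    """Fits one Gaussian per class and classifies by maximum log-posterior."""

    def fit(self, X, y):
        # X: (n_samples, n_features) image features; y: labels (0 = non-real, 1 = real)
        self.classes_ = np.unique(y)
        self.priors_, self.means_, self.covs_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_.append(len(Xc) / len(X))
            self.means_.append(Xc.mean(axis=0))
            # Small ridge term keeps the covariance invertible
            self.covs_.append(np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
        return self

    def _log_posterior(self, X, mean, cov, prior):
        diff = X - mean
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        # Squared Mahalanobis distance for every sample
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        return np.log(prior) - 0.5 * (logdet + maha)

    def predict(self, X):
        scores = np.column_stack([
            self._log_posterior(X, m, c, p)
            for m, c, p in zip(self.means_, self.covs_, self.priors_)
        ])
        return self.classes_[np.argmax(scores, axis=1)]
```

In such a setup, images predicted as non-real would simply be discarded before the object category classifiers are trained; the feature representation (e.g., color or texture statistics) is an assumption here and is not specified by the abstract.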