Virtual sensors for human concepts: Building detection by an outdoor mobile robot

  • Authors:
  • Martin Persson; Tom Duckett; Achim Lilienthal

  • Affiliations:
  • Martin Persson: Center for Applied Autonomous Sensor Systems, Department of Technology, Örebro University, Örebro, Sweden
  • Tom Duckett: Department of Computing and Informatics, University of Lincoln, Lincoln, UK
  • Achim Lilienthal: Center for Applied Autonomous Sensor Systems, Department of Technology, Örebro University, Örebro, Sweden

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2007

Abstract

In human-robot communication, it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real-world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey-level images. The features are based on edge orientation, the configurations of these edges, and on grey-level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on board the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (the location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
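
The pipeline the abstract describes (generic image features combined by AdaBoost into a building/non-building classifier) can be sketched as follows. This is a minimal illustration assuming NumPy, SciPy, and scikit-learn as stand-ins; the feature computations below are simplified proxies for the paper's edge-orientation, edge-configuration, and grey-level-clustering features, not the authors' exact implementation.

```python
# Minimal sketch of a "virtual sensor" for building detection.
# The features are simplified illustrative proxies, not the paper's exact set.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def extract_features(grey):
    """Illustrative feature vector from a grey-level image (2-D float array)."""
    gx = ndimage.sobel(grey, axis=1)
    gy = ndimage.sobel(grey, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                # edge orientation in [-pi, pi]
    strong = mag > np.percentile(mag, 90)   # keep only the strongest edges
    # Orientation histogram: man-made structures tend to show dominant
    # horizontal/vertical edges, while vegetation is more uniform.
    hist, _ = np.histogram(ang[strong], bins=8, range=(-np.pi, np.pi))
    hist = hist / max(hist.sum(), 1)
    # Crude grey-level clustering proxy: number of well-populated
    # intensity bins in the image histogram.
    levels, _ = np.histogram(grey, bins=16)
    n_clusters = np.count_nonzero(levels > 0.01 * grey.size)
    return np.concatenate([hist, [n_clusters]])

# AdaBoost over decision stumps; hyperparameters are assumptions, and
# scikit-learn < 1.2 spells the first argument `base_estimator` instead.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50)

# X: stacked feature vectors of training sub-images; y: 1 = building,
# 0 = non-building.
# clf.fit(X, y)
# clf.predict(extract_features(new_grey_image)[None, :])
```

Decision stumps let boosting select the most discriminative individual features one at a time, which matches the abstract's description of AdaBoost combining a set of generic features.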
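
The final on-robot step, fusing per-sector classifications from the panoramic view with the robot's pose to point a remote operator toward likely buildings, might look like the sketch below; the eight-sector layout, the counter-clockwise convention, and all names here are illustrative assumptions rather than the paper's scheme.

```python
import math

def building_bearings(robot_theta, sector_labels, n_sectors=8):
    """World-frame bearings (radians) of panoramic sectors classified as
    'building', assuming sector 0 is centred on the robot's heading and
    sectors proceed counter-clockwise (an illustrative convention)."""
    width = 2 * math.pi / n_sectors
    return [(robot_theta + i * width) % (2 * math.pi)
            for i, label in enumerate(sector_labels) if label == 1]

# Example: robot heading east (theta = 0); sectors 1 and 2 flagged.
print(building_bearings(0.0, [0, 1, 1, 0, 0, 0, 0, 0]))
# -> bearings ~0.79 and ~1.57 rad; drawn as rays from the robot's map
#    position, these indicate likely building directions to the operator.
```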