Using the Kinect as a navigation sensor for mobile robotics

  • Authors:
  • Ayrton Oliver; Steven Kang; Burkhard C. Wünsche; Bruce MacDonald

  • Affiliations:
  • University of Auckland, Auckland, New Zealand (all authors)

  • Venue:
  • Proceedings of the 27th Conference on Image and Vision Computing New Zealand
  • Year:
  • 2012

Abstract

Localisation and mapping are key requirements for navigation in mobile robotics. Laser scanners are frequently used, but they are expensive and provide only 2D mapping capabilities. In this paper we investigate the suitability of the Xbox Kinect optical sensor for navigation and simultaneous localisation and mapping (SLAM). We present a prototype which uses the Kinect to capture 3D point cloud data of the external environment. The data is used by a 3D SLAM algorithm to create 3D models of the environment and to localise the robot within it. By projecting the 3D point cloud onto a 2D plane, we also use the Kinect sensor data as input to a 2D SLAM algorithm. We compare the performance of the Kinect-based 2D and 3D SLAM algorithms with traditional solutions and show that use of the Kinect sensor is viable. However, its smaller field of view, limited depth range, and the higher processing requirements for the resulting sensor data restrict its range of applications in practice.
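
The abstract does not give implementation details, but the projection step it describes (turning a 3D point cloud into input for a 2D SLAM algorithm) can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' code: it takes a thin horizontal slab of points around the sensor plane and bins them by bearing angle to produce a laser-scan-like range array. The function name and the parameter defaults (a roughly 57 degree horizontal field of view and a usable depth of about 4 m for the Kinect) are assumptions for illustration.

```python
import numpy as np

def point_cloud_to_2d_scan(points, z_min=-0.1, z_max=0.1,
                           fov_deg=57.0, num_beams=640, max_range=4.0):
    """Project a 3D point cloud (N x 3, sensor frame: x forward, y left,
    z up, in metres) onto the horizontal plane, producing a 1D range
    array similar to a 2D laser scan.

    Illustrative sketch only; the Kinect FOV and range defaults are
    assumptions, not values taken from the paper."""
    # Keep only points in a thin horizontal slab around sensor height.
    slab = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]

    # Bearing angle and planar distance of each remaining point.
    angles = np.arctan2(slab[:, 1], slab[:, 0])
    ranges = np.hypot(slab[:, 0], slab[:, 1])

    # Discretise the field of view into beams and keep the nearest
    # return per beam, as a 2D laser scanner would.
    half_fov = np.radians(fov_deg) / 2.0
    scan = np.full(num_beams, max_range)
    in_fov = np.abs(angles) <= half_fov
    beams = ((angles[in_fov] + half_fov) / (2 * half_fov)
             * (num_beams - 1)).astype(int)
    np.minimum.at(scan, beams, np.clip(ranges[in_fov], 0.0, max_range))
    return scan
```

The resulting range array has the same shape as the output of a 2D laser scanner, so it could be fed to any 2D SLAM implementation that expects laser range data; the narrower field of view and shorter depth range noted in the abstract remain visible as a limited angular span and a low clipping distance in the synthetic scan.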