Spatially unconstrained, gesture-based human-robot interaction

  • Authors:
  • Guillaume Doisy; Aleksandar Jevtić; Saša Bodiroža

  • Affiliations:
  • Ben-Gurion University of the Negev, Beersheva, Israel; Robosoft, Bidart, France; Humboldt-Universität zu Berlin, Berlin, Germany

  • Venue:
  • Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13)
  • Year:
  • 2013

Abstract

For human-robot interaction to take place, a robot needs to perceive humans. The space in which a robot can perceive humans is constrained by the limitations of its sensors. These constraints can be circumvented with external sensors, as in intelligent environments; otherwise, humans must ensure that they remain within the robot's perceptual range. With the robotic platform presented here, the roles are reversed: the robot autonomously ensures that the human stays within the area it perceives. This is achieved by a combination of hardware and algorithms that autonomously track the person, estimate their position, and follow them, while recognizing their gestures and moving through space.
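To make the described behavior concrete, the sketch below shows a minimal person-following control loop of the kind the abstract alludes to: the robot turns to keep a tracked person centered in its sensor's field of view and drives to maintain a target following distance. This is a generic illustration, not the authors' implementation; the names (PersonTracker, send_velocity) and the gains are assumptions for the example.

```python
# Hypothetical sketch of a "keep the person perceivable and follow" loop.
# PersonTracker, send_velocity, and all gains are illustrative assumptions,
# not taken from the paper.
import time

TARGET_DISTANCE = 1.5   # desired following distance in meters (assumed)
K_ANGULAR = 1.0         # gain for turning toward the person (assumed)
K_LINEAR = 0.5          # gain for closing the distance gap (assumed)


class PersonTracker:
    """Stand-in for the robot's person-perception pipeline."""

    def estimate(self):
        # Would return (distance_m, bearing_rad) of the tracked person,
        # or None if the person is currently not perceived.
        return (2.0, 0.3)


def send_velocity(linear, angular):
    """Stand-in for the robot's motion interface."""
    print(f"cmd_vel: linear={linear:.2f} m/s, angular={angular:.2f} rad/s")


def follow(tracker, steps=5, dt=0.1):
    for _ in range(steps):
        estimate = tracker.estimate()
        if estimate is None:
            # Person lost: stop rather than drive blindly.
            send_velocity(0.0, 0.0)
        else:
            distance, bearing = estimate
            # Proportional control: rotate so the person stays centered
            # in the field of view, and close the gap to the target
            # following distance.
            angular = K_ANGULAR * bearing
            linear = K_LINEAR * (distance - TARGET_DISTANCE)
            send_velocity(linear, angular)
        time.sleep(dt)


if __name__ == "__main__":
    follow(PersonTracker())
```

In a real system the tracker estimate would come from the robot's sensors (e.g., a depth camera skeleton tracker) and the velocity command would go to the mobile base, but the reversed-roles idea is the same: the control loop, not the human, is responsible for keeping the person inside the perceived area.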