A vision-based architecture for long-term human-robot interaction

  • Authors: Christopher King, Xavier Palathingal, Monica Nicolescu, Mircea Nicolescu
  • Affiliations: University of Nevada, Reno, NV (all authors)
  • Venue: IASTED-HCI '07: Proceedings of the Second IASTED International Conference on Human Computer Interaction
  • Year: 2007

Abstract

Advances in robotics research bring robots closer to real-world applications. Although robots have become increasingly capable, productive interaction with them is still restricted to specialists in the field. In this paper, we propose an interactive architecture, based on visual capabilities, that allows robots to interact with people in a natural way, to deal with multiple users, and to remain constantly aware of their surroundings. First, we endow our robot with visual capabilities that allow it to detect when people request to engage it in interaction. Second, we provide the robot with the flexibility to deal with multiple users, accommodating multiple user requests and task interruptions over extended periods. The visual capabilities we propose allow the robot to identify multiple users, with multiple postures, in real time and in dynamic environments where both the robot and the human users are moving. We demonstrate our approach on a Pioneer 3DX mobile robot performing service tasks in a real-world environment.
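
To make the multi-user handling idea concrete, below is a minimal Python sketch of one way a robot could queue user requests and tolerate task interruptions over time. This is not the authors' implementation; all class, method, and task names are hypothetical and chosen only for illustration.

```python
from __future__ import annotations
from collections import deque
from dataclasses import dataclass


@dataclass
class Task:
    user: str               # identity of the requesting user
    name: str                # requested service task (e.g., "deliver mail")
    progress: float = 0.0    # fraction completed, so an interrupted task can resume


class InteractionManager:
    """Toy scheduler: an urgent engagement request interrupts the current task,
    which is pushed back onto the queue and resumed later."""

    def __init__(self) -> None:
        self.pending: deque[Task] = deque()
        self.current: Task | None = None

    def request(self, user: str, name: str, urgent: bool = False) -> None:
        task = Task(user, name)
        if urgent and self.current is not None:
            # Interrupt: save the partially completed task for later resumption.
            self.pending.appendleft(self.current)
            self.current = task
        else:
            self.pending.append(task)

    def step(self) -> None:
        # One control-loop tick: pick up the next task if idle, then make progress.
        if self.current is None and self.pending:
            self.current = self.pending.popleft()
        if self.current is not None:
            self.current.progress += 0.25    # simulate partial task execution
            if self.current.progress >= 1.0:
                print(f"finished '{self.current.name}' for {self.current.user}")
                self.current = None


if __name__ == "__main__":
    mgr = InteractionManager()
    mgr.request("user_a", "deliver mail")
    mgr.step()
    mgr.request("user_b", "guide to office", urgent=True)  # interrupts user_a's task
    for _ in range(8):
        mgr.step()
```

In this sketch the interrupted task keeps its progress and is resumed once the urgent request completes, which is one simple way to support extended, multi-user operation of the kind the abstract describes.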