Selecting and commanding groups in a multi-robot vision based system

  • Authors:
  • Brian Milligan; Greg Mori; Richard T. Vaughan

  • Affiliations:
  • Simon Fraser University, Burnaby, BC, Canada; Simon Fraser University, Burnaby, BC, Canada; Simon Fraser University, Burnaby, BC, Canada

  • Venue:
  • Proceedings of the 6th international conference on Human-robot interaction
  • Year:
  • 2011

Abstract

We present a novel method for a human user to select groups of robots without using any external instruments. We use computer vision techniques to read hand gestures from a user and use the gesture information to select single or multiple robots from a population and assign them a task. To select robots, the user simply draws a circle in the air around the robots that he or she wants to command. Once the user has selected the group of robots, he or she can send them to a target location by pointing at it. To achieve this we use cameras mounted on the mobile robots to find the user's face and then track his or her hand.

Our method exploits an observation from human-robot interaction research on pointing, which found that a human's pointing target is best inferred from the line between the human's eyes and the extended hand [1]. When the user circles robots, the projected eye-to-hand line sweeps out a cone-like shape that envelops the selected robots. From a 2D camera mounted on a robot, this cone appears with the user's face at the vertex and the hand movement tracing a circular slice of the cone. We show in the video how each robot can tell whether it has been selected by testing whether the face lies within the circle drawn by the hand: if the face is inside the circle the robot is selected; if it is outside, it is not. Following selection, the robots read a command by looking for a pointing gesture, detected as an outstretched hand. From the pointing gesture the robots collectively infer which target the user is pointing at by calculating the distance and direction the hand has moved relative to the face. The selected robots then travel to the target, and unselected robots can subsequently be selected and commanded as desired.

The robots communicate their state to the user through LEDs on their chassis. When a robot is searching for the user's face, the LEDs flash to get the user's attention (since frontal faces are easiest to detect). When a robot finds the user's face, the lights turn solid yellow to indicate that it is ready to be selected. When selected, the robot's LEDs turn blue to indicate that it can now be commanded. Once robots are sent off to a location, the remaining robots can be selected and assigned another task. We demonstrate this method working on low-powered Atom netbooks and off-the-shelf USB web cameras. This shows the first working implementation of a system that allows a human to select and command groups of robots without using any external instruments.
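The selection and pointing steps described in the abstract reduce, in each robot's image plane, to two simple geometric checks: a point-in-polygon test of the detected face against the circle traced by the hand, and a relative offset of the extended hand from the face. The sketch below is a minimal illustration of those checks, assuming face and hand positions are already supplied per frame by an external detector and tracker; the function names, the ray-casting polygon test, and the example coordinates are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two geometric tests implied by
# the abstract, in image (pixel) coordinates. Face and hand positions are
# assumed to come from an external face detector and hand tracker, not shown.

from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates


def face_inside_hand_circle(face: Point, hand_path: List[Point]) -> bool:
    """Selection test: was this robot circled?

    The circling gesture projects onto the robot's camera as a closed curve
    traced by the hand; the robot is selected if the user's face lies inside
    that curve. Here the curve is treated as a polygon and tested with the
    standard ray-casting point-in-polygon rule.
    """
    x, y = face
    inside = False
    n = len(hand_path)
    for i in range(n):
        x1, y1 = hand_path[i]
        x2, y2 = hand_path[(i + 1) % n]
        # Does a horizontal ray from the face cross this polygon edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def pointing_offset(face: Point, hand: Point) -> Tuple[float, float]:
    """Pointing test: distance and direction of the extended hand relative
    to the face, used to infer which target is being indicated. Mapping this
    image-plane offset to a ground target requires knowledge of the target
    positions, which is outside this sketch.
    """
    return (hand[0] - face[0], hand[1] - face[1])


if __name__ == "__main__":
    # Hypothetical pixel coordinates: a face at the image centre and a
    # roughly circular hand trajectory drawn around it.
    face = (320.0, 240.0)
    circle = [(320 + 80 * c, 240 + 80 * s)
              for c, s in [(1, 0), (0.7, 0.7), (0, 1), (-0.7, 0.7),
                           (-1, 0), (-0.7, -0.7), (0, -1), (0.7, -0.7)]]
    print(face_inside_hand_circle(face, circle))   # True -> robot selected
    print(pointing_offset(face, (450.0, 260.0)))   # hand extended to the right
```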