Multi-user multi-touch multi-robot command and control of multiple simulated robots

  • Authors:
  • Eric McCann; Sean McSheehy; Holly Yanco

  • Affiliation:
  • University of Massachusetts Lowell, Lowell, MA, USA (all authors)

  • Venue:
  • HRI '12: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction
  • Year:
  • 2012


Abstract

This video demonstrates three users sharing control of eight simulated robots with a Microsoft Surface and two Apple iPads using our Multi-user Multi-touch Multi-robot Command and Control Interface. All of the command and control interfaces can move their user's world camera through space, task one or more robots with a series of waypoints, and assume manual control of a single robot to inspect its sensors and teleoperate it. Each displays full-screen images sent from its user's world camera, overlaid with icons showing the position and selection state of each robot in the camera's field of view, dots indicating each robot's current destination, and rectangles corresponding to each other user's field of view. One multi-touch interface runs on the Microsoft Surface and the others on Apple iPads; all have the same functional capabilities, apart from a few differences imposed by each platform's form factor and touch-sensing method. The Surface interface can interpret gestures involving more than just fingertips, such as placing both fists on the screen to make all robots stop and wait for new commands; because iPads sense touch capacitively, they cannot detect such gestures. The Surface interface also allows its user to move the world camera while simultaneously teleoperating one of the robots with our Dynamically Resizing Ergonomic and Multi-touch Controller (DREAM Controller) [1, 2]; on the iPads, the command and control mode and the teleoperation mode are mutually exclusive. The robots are simulated in Microsoft Robotics Developer Studio, and each user's world camera has movement capabilities similar to those of a quadcopter. All UDP communication between users and robots is handled by a single server that routes messages to the appropriate targets, allowing both the number of robots and the number of users to scale.
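The single-server UDP routing described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: the `RelayServer` class, its `"target:payload"` wire format, and the endpoint names are all assumptions invented here, since the abstract does not specify the protocol.

```python
# Hypothetical sketch of a single UDP relay server that routes messages
# between command-and-control clients and robots. The "target:payload"
# datagram format is an assumption for illustration only.
import socket


class RelayServer:
    """Routes each datagram to the address registered under its target name."""

    def __init__(self, host="127.0.0.1"):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind((host, 0))          # let the OS pick a free port
        self.addr = self.sock.getsockname()
        self.registry = {}                 # target name -> (host, port)

    def register(self, name, addr):
        # Robots and users both register here, so the server scales in
        # either direction simply by adding registry entries.
        self.registry[name] = addr

    def serve_once(self):
        # Receive one datagram, look up its target, and forward the payload.
        data, _src = self.sock.recvfrom(4096)
        target, _, payload = data.partition(b":")
        dest = self.registry.get(target.decode())
        if dest is not None:
            self.sock.sendto(payload, dest)


if __name__ == "__main__":
    server = RelayServer()

    # One simulated robot endpoint registers with the server.
    robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    robot.bind(("127.0.0.1", 0))
    server.register("robot1", robot.getsockname())

    # A client tasks that robot with a (hypothetical) waypoint command.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"robot1:waypoint 3.0 4.0", server.addr)

    server.serve_once()                    # route one datagram
    print(robot.recv(4096).decode())       # prints "waypoint 3.0 4.0"
```

Routing through one server, rather than connecting each interface directly to each robot, means adding a user or a robot only requires one new registration rather than a mesh of new connections.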