Evaluating autonomous ground-robots

  • Authors:
  • Anthony Finn; Adam Jacoff; Mike Del Rose; Bob Kania; Jim Overholt; Udam Silva; Jon Bornstein

  • Affiliations:
  • Defence & Systems Institute (DASI), University of South Australia, Mawson Lakes, SA 5095, Australia; National Institute of Standards and Technology (NIST), Intelligent Systems Division, Gaithersburg, Maryland 20899-8230; Tank Automotive Research, Development and Engineering Center (TARDEC), Warren, Michigan 48397-5000; Communications-Electronics Research, Development and Engineering Center (CERDEC), Ft. Monmouth, New Jersey 07703; Army Research Laboratory (ARL), Powder Mill Road, Adelphi, Maryland 20783-1197

  • Venue:
  • Journal of Field Robotics
  • Year:
  • 2012

Abstract

The robotics community benefits from common test methods and metrics of performance to focus its research. As a result, many performance tests, competitions, demonstrations, and analyses have been devised to measure the autonomy, intelligence, and overall effectiveness of robots. These range from robot soccer (football) to measuring the performance of a robot in computer simulations. However, many resultant designs are narrowly focused or optimized against the specific tasks under consideration. In the Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010, the need to transition the technology beyond the laboratory and into contexts for which it had not specifically been designed or tested meant that a performance evaluation scheme was needed that avoided domain-specific tests. However, the scheme still had to retain the capacity to deliver an impartial, consistent, objective, and evidence-based assessment that rewarded individual and multivehicle autonomy. It was also important to maximize the understanding and outcomes gained by technologists, sponsors, and potential users through after-action review. The need for real-time, simultaneous, and continuous tracking of multiple interacting entities in an urban environment covering more than 250,000 square meters compounded the complexity of the task. This paper describes the scheme used to progressively down-select and finally rank the teams competing in this complex and “operationally realistic” challenge. © 2012 Wiley Periodicals, Inc.
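
The abstract does not disclose the actual scoring formula used to down-select and rank teams. As a purely illustrative sketch of how such a non-domain-specific, multi-criteria evaluation could be aggregated (this is not the MAGIC 2010 scheme; the criteria names and weights below are hypothetical placeholders), a weighted ranking might look like:

```python
# Illustrative only: a generic weighted multi-criteria team ranking.
# NOT the MAGIC 2010 evaluation scheme; criteria and weights are
# hypothetical placeholders for the kinds of qualities the abstract
# mentions (individual autonomy, multivehicle autonomy, mission outcome).

from typing import Dict, List, Tuple

# Hypothetical judging criteria and weights (chosen to sum to 1.0).
WEIGHTS: Dict[str, float] = {
    "autonomy": 0.4,            # individual-vehicle autonomy
    "coordination": 0.3,        # multivehicle cooperation
    "mission_completion": 0.3,  # tasks completed in the urban course
}

def rank_teams(scores: Dict[str, Dict[str, float]]) -> List[Tuple[str, float]]:
    """Rank teams by the weighted sum of per-criterion scores (0-100 scale)."""
    totals = {
        team: sum(WEIGHTS[c] * per_criterion.get(c, 0.0) for c in WEIGHTS)
        for team, per_criterion in scores.items()
    }
    # Highest total first; ties broken alphabetically for determinism.
    return sorted(totals.items(), key=lambda kv: (-kv[1], kv[0]))

if __name__ == "__main__":
    example = {
        "Team A": {"autonomy": 80, "coordination": 70, "mission_completion": 90},
        "Team B": {"autonomy": 95, "coordination": 60, "mission_completion": 75},
    }
    for place, (team, total) in enumerate(rank_teams(example), start=1):
        print(f"{place}. {team}: {total:.1f}")
```

A progressive down-select, as described in the abstract, could then be modeled by applying such a ranking at each stage and advancing only the top-scoring teams, though the paper itself should be consulted for the actual criteria and procedure.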