Improved benchmarking for steering algorithms

  • Authors:
  • Mubbasir Kapadia, Matthew Wang, Glenn Reinman, Petros Faloutsos

  • Affiliations:
  • University of California, Los Angeles (all authors)

  • Venue:
  • MIG '11: Proceedings of the 4th International Conference on Motion in Games
  • Year:
  • 2011

Abstract

The statistical analysis of multi-agent simulations requires a definitive set of benchmarks that represents the wide spectrum of challenging scenarios agents encounter in dynamic environments, together with a scoring method that objectively quantifies the performance of a steering algorithm on a particular scenario. In this paper, we first identify several limitations of prior evaluation methods. Next, we define a measure of normalized effort that penalizes deviation from the desired speed, deviation from the optimal path, and collisions in a single metric. Finally, we propose a new set of benchmark categories that capture the different situations agents encounter in dynamic environments, and we identify truly challenging scenarios for each category. We use our method to objectively evaluate and compare three state-of-the-art steering approaches and one baseline reactive approach. The proposed scoring mechanism can be used (a) to evaluate a single algorithm on a single scenario, (b) to compare the performance of an algorithm across different benchmarks, and (c) to compare different steering algorithms.
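The abstract names three penalty terms that the normalized-effort metric folds into one score: deviation from desired speed, deviation from the optimal path, and collisions. The paper's exact formulation is not reproduced here; the Python sketch below is only an illustrative way to combine those three terms into a single scalar, with all function names, weights, and normalizations assumed rather than taken from the paper.

```python
import math

def normalized_effort(trajectory, desired_speed, optimal_path_length,
                      num_collisions, w_speed=1.0, w_path=1.0, w_coll=1.0):
    """Hypothetical single-metric score in the spirit of 'normalized
    effort': lower is better. The actual formulation in the paper may
    differ; the weights and structure here are illustrative only."""
    # trajectory: list of (x, y, speed) samples at fixed timesteps;
    # desired_speed and optimal_path_length are assumed to be positive.
    speeds = [s for (_, _, s) in trajectory]

    # Penalty 1: mean absolute deviation from the agent's desired speed,
    # normalized by the desired speed itself.
    speed_dev = sum(abs(s - desired_speed) for s in speeds) / (
        len(speeds) * desired_speed)

    # Penalty 2: extra distance traveled relative to the optimal path,
    # normalized by the optimal path length.
    traveled = sum(
        math.dist(trajectory[i][:2], trajectory[i + 1][:2])
        for i in range(len(trajectory) - 1))
    path_dev = max(0.0, traveled - optimal_path_length) / optimal_path_length

    # Penalty 3: collision count, normalized per trajectory sample.
    coll = num_collisions / len(trajectory)

    # Weighted sum folds the three penalties into one metric.
    return w_speed * speed_dev + w_path * path_dev + w_coll * coll
```

Normalizing each term (by the desired speed, the optimal path length, and the trajectory length) keeps the three penalties on comparable scales before weighting, which is what lets a single score compare algorithms across scenarios of different sizes, in line with uses (a)-(c) above.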