SteerBench: a benchmark suite for evaluating steering behaviors

  • Authors:
  • Shawn Singh; Mubbasir Kapadia; Petros Faloutsos; Glenn Reinman

  • Affiliations:
  • UCLA Department of Computer Science, Boelter Hall 4531-F, Los Angeles, CA 90095-1596, USA

  • Venue:
  • Computer Animation and Virtual Worlds - International Workshop on Motion in Games (MIG08)
  • Year:
  • 2009

Abstract

Steering is a challenging task required by nearly all agents in virtual worlds. There is a large and growing number of approaches to steering, and it is becoming increasingly important to ask a fundamental question: how can we objectively compare steering algorithms? To our knowledge, there is no standard way of evaluating or comparing the quality of steering solutions. This paper presents SteerBench: a benchmark framework for objectively evaluating steering behaviors for virtual agents. We propose a diverse set of test cases, metrics of evaluation, and a scoring method that can be used to compare different steering algorithms. Our framework can be easily customized by a user to evaluate specific behaviors and new test cases. We demonstrate our benchmark process on two example steering algorithms, showing the insight gained from our metrics. We hope that this framework can grow into a standard for steering evaluation. Copyright © 2009 John Wiley & Sons, Ltd.

Existing work on agent steering behaviors is usually evaluated subjectively on a limited number of scenarios, which will not suffice as the field matures. SteerBench consists of a suite of test cases, detailed metrics, and a method of objectively scoring steering behaviors. We demonstrate the scoring process, customizability, and detailed information that SteerBench provides.