Steering is a challenging task required by nearly all agents in virtual worlds. With a large and growing number of steering approaches, it is becoming increasingly important to ask a fundamental question: how can we objectively compare steering algorithms? To our knowledge, there is no standard way of evaluating or comparing the quality of steering solutions. This paper presents SteerBench: a benchmark framework for objectively evaluating steering behaviors for virtual agents. We propose a diverse set of test cases, metrics of evaluation, and a scoring method that can be used to compare different steering algorithms. The framework can be easily customized by a user to evaluate specific behaviors and new test cases. We demonstrate our benchmark process on two example steering algorithms, showing the insight gained from our metrics. We hope that this framework can grow into a standard for steering evaluation. Copyright © 2009 John Wiley & Sons, Ltd.

Existing work on agent steering behaviors is usually evaluated subjectively on a limited number of scenarios, which will not suffice as the field matures. SteerBench consists of a suite of test cases, detailed metrics, and a method for objectively scoring steering behaviors. We demonstrate the scoring process, the customizability, and the detailed information that SteerBench provides.
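To make the idea of combining per-test-case metrics into a single comparable score concrete, here is a minimal sketch of that kind of scoring pipeline. The metric names (`num_collisions`, `time_to_goal`, `total_effort`) and the weights are illustrative assumptions for this sketch, not the actual metrics or weights defined by SteerBench.

```python
# Hypothetical sketch of a SteerBench-style scoring method: each steering
# algorithm is run on a suite of test cases, per-case penalty metrics are
# combined with weights, and the suite average gives one comparable score.
# All names and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestCaseMetrics:
    name: str
    num_collisions: float
    time_to_goal: float   # seconds to reach the goal
    total_effort: float   # e.g. integrated energy spent steering

# Illustrative weights: collisions are penalized far more heavily
# than slow or inefficient (but safe) steering.
WEIGHTS = {"num_collisions": 100.0, "time_to_goal": 1.0, "total_effort": 0.01}

def score_test_case(m: TestCaseMetrics) -> float:
    """Weighted sum of penalty metrics; lower is better."""
    return (WEIGHTS["num_collisions"] * m.num_collisions
            + WEIGHTS["time_to_goal"] * m.time_to_goal
            + WEIGHTS["total_effort"] * m.total_effort)

def benchmark_score(cases: list[TestCaseMetrics]) -> float:
    """Average penalty score over the full test suite."""
    return sum(score_test_case(m) for m in cases) / len(cases)

# Two hypothetical test cases from a larger suite.
suite = [
    TestCaseMetrics("oncoming-agents", num_collisions=0, time_to_goal=12.5, total_effort=300.0),
    TestCaseMetrics("doorway", num_collisions=1, time_to_goal=20.0, total_effort=450.0),
]
print(benchmark_score(suite))  # → 70.0 (lower score = better steering)
```

Because the weights live in one table, a user could re-weight the same recorded metrics to emphasize different behaviors, which mirrors the customizability the framework advertises.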