Scenario space: characterizing coverage, quality, and failure of steering algorithms
Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '11)
The statistical analysis of multi-agent simulations requires a definitive set of benchmarks representing the wide spectrum of challenging scenarios that agents encounter in dynamic environments, and a scoring method that objectively quantifies the performance of a steering algorithm on a particular scenario. In this paper, we first identify several limitations of prior evaluation methods. Next, we define a measure of normalized effort that penalizes deviation from the desired speed, deviation from optimal paths, and collisions in a single metric. Finally, we propose a new set of benchmark categories that capture the different situations agents encounter in dynamic environments, and we identify truly challenging scenarios for each category. We use our method to objectively evaluate and compare three state-of-the-art steering approaches and one baseline reactive approach. Our scoring mechanism can be used (a) to evaluate a single algorithm on a single scenario, (b) to compare the performance of an algorithm across different benchmarks, and (c) to compare different steering algorithms.
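To make the shape of such a composite metric concrete, the following is a minimal, hypothetical sketch of a normalized-effort-style score: it is not the paper's actual formulation, and the function name, weights, and normalization choices are all illustrative assumptions. It combines the three penalty terms the abstract names (deviation from desired speed, deviation from the optimal path length, and collision count) into a single non-negative score where lower is better.

```python
import math

def normalized_effort(path, desired_speed, optimal_length, collisions,
                      w_speed=1.0, w_path=1.0, w_coll=1.0):
    """Illustrative composite steering score (lower is better).

    path           -- list of (x, y) waypoints sampled at unit time steps
    desired_speed  -- the agent's preferred speed
    optimal_length -- length of the shortest collision-free path
    collisions     -- number of collisions incurred
    """
    # Distance actually traveled along the recorded trajectory.
    traveled = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    steps = max(len(path) - 1, 1)
    avg_speed = traveled / steps

    # Each penalty term is zero when the agent behaves optimally.
    speed_dev = abs(avg_speed - desired_speed) / desired_speed
    path_dev = max(traveled - optimal_length, 0.0) / optimal_length

    return w_speed * speed_dev + w_path * path_dev + w_coll * collisions
```

Under this sketch, an agent that walks the optimal path at its desired speed with no collisions scores exactly zero, which makes scores comparable across scenarios of different scale, one plausible reading of "normalized" in the abstract.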