Simulation algorithm implementations are usually evaluated by experimental performance analysis. Conducting such studies is challenging and time-consuming, as many impact factors have to be controlled and the resulting algorithm performance has to be analyzed. The problem is aggravated when many alternative implementations are compared across a multitude of benchmark model setups. We present an architecture that supports the automated execution of performance evaluation experiments on several levels. We motivate desirable benchmark model properties and exploit the quasi-steady-state property of such models for simulation end time calibration, a simple technique that saves computational effort in simulator performance comparisons. The overall mechanism is flexible and can easily be adapted to the requirements that different kinds of performance studies impose. A simple performance study shows that it can speed up performance experiments significantly.
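To make the calibration idea concrete, the following Python sketch shows one way such an end time calibration could work; it is an illustration, not the authors' implementation. The `run_simulation(model, end_time)` callable, the parameter names, and the linear extrapolation scheme are all assumptions introduced here. The sketch exploits the quasi-steady-state property named in the abstract: if the model's computational load per unit of simulated time is roughly constant, wall-clock cost grows roughly linearly with the simulation end time, so the end time that yields a target execution time for a reference simulator can be found in a few probe runs.

```python
import time


def calibrate_end_time(run_simulation, model, target_wallclock=1.0,
                       initial_end_time=1.0, tolerance=0.1,
                       max_iterations=20):
    """Search for a simulation end time whose execution costs roughly
    `target_wallclock` seconds of wall-clock time.

    Relies on the quasi-steady-state assumption: the model's computational
    load per unit of simulated time is roughly constant, so wall-clock
    time scales approximately linearly with the simulation end time.
    """
    end_time = initial_end_time
    for _ in range(max_iterations):
        start = time.perf_counter()
        run_simulation(model, end_time)  # hypothetical simulator interface
        elapsed = time.perf_counter() - start
        if abs(elapsed - target_wallclock) <= tolerance * target_wallclock:
            return end_time
        # Linear extrapolation, justified by the quasi-steady-state property.
        end_time *= target_wallclock / max(elapsed, 1e-9)
    raise RuntimeError("end time calibration did not converge")


if __name__ == "__main__":
    import math
    import random

    def dummy_run(model, end_time):
        # Stand-in for a reference simulator whose wall-clock cost grows
        # linearly with the simulated time span.
        t = 0.0
        while t < end_time:
            math.sin(random.random())
            t += 1e-4

    calibrated = calibrate_end_time(dummy_run, model=None,
                                    target_wallclock=0.2)
    print("calibrated simulation end time:", calibrated)
```

In a performance comparison, the end time calibrated once with a reference simulator would then be reused for every candidate implementation, so that no run consumes substantially more wall-clock time than is needed for a meaningful comparison.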