A gradient approach for smartly allocating computing budget for discrete event simulation. WSC '96: Proceedings of the 28th Conference on Winter Simulation.
Simulation Budget Allocation for Further Enhancing the Efficiency of Ordinal Optimization. Discrete Event Dynamic Systems.
New Two-Stage and Sequential Procedures for Selecting the Best Simulated System. Operations Research.
ACM Transactions on Modeling and Computer Simulation (TOMACS).
New developments in ranking and selection: an empirical comparison of the three main approaches. WSC '05: Proceedings of the 37th Conference on Winter Simulation.
Proceedings of the 39th Conference on Winter Simulation: 40 Years! The Best Is Yet to Come.
A Knowledge-Gradient Policy for Sequential Information Collection. SIAM Journal on Control and Optimization.
IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
The conjunction of the knowledge gradient and the economic approach to simulation selection. Winter Simulation Conference.
Consistency of Sequential Bayesian Sampling Policies. SIAM Journal on Control and Optimization.
Sequential Sampling with Economics of Selection Procedures. Management Science.
Optimization via simulation with Bayesian statistics and dynamic programming. Proceedings of the Winter Simulation Conference.
Sequential screening: a Bayesian dynamic programming analysis of optimal group-splitting. Proceedings of the Winter Simulation Conference.
Value of information methods for pairwise sampling with correlations. Proceedings of the Winter Simulation Conference.
Guessing preferences: a new approach to multi-attribute ranking and selection. Proceedings of the Winter Simulation Conference.
We consider the ranking and selection of normal means in a fully sequential Bayesian context. By treating the sampling and stopping problems jointly rather than separately, we derive a new composite stopping/sampling rule. The sampling component of the composite rule coincides with the previously introduced LL1 sampling rule, but the stopping rule is new. The new stopping rule significantly improves the performance of LL1 relative to its performance under EOC Bonf, the best previously known adaptive stopping rule, outperforming it in every case tested.
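To make the procedure's shape concrete, here is a minimal sketch of a fully sequential ranking-and-selection loop with an adaptive stopping rule based on a Bonferroni bound on the expected opportunity cost (the quantity behind EOC Bonf). The allocation step uses a simple largest-standard-error heuristic purely as a stand-in, since the actual LL1 formula is not reproduced here; `eoc_bonf`, `select_best`, and all parameter names are illustrative, not the paper's notation.

```python
import math
import random
import statistics

def eoc_bonf(means, ses):
    """Bonferroni-style upper bound on the expected opportunity cost (EOC)
    of stopping now and picking the alternative with the highest sample mean.
    Sums E[(X_i - X_best)^+] over all non-best alternatives, with each gap
    treated as normal with the estimated mean and standard error."""
    best = max(range(len(means)), key=means.__getitem__)
    bound = 0.0
    for i in range(len(means)):
        if i == best:
            continue
        d = means[best] - means[i]          # estimated gap to the incumbent (>= 0)
        s = math.hypot(ses[best], ses[i])   # std. error of the gap estimate
        z = d / s
        pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
        cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        bound += s * pdf - d * (1 - cdf)    # E[(N(-d, s^2))^+] for this pair
    return bound

def select_best(samplers, n0=10, eoc_target=1e-3, max_samples=5000):
    """Fully sequential selection: after n0 pilot samples per alternative,
    keep sampling until the EOC bound falls below eoc_target (or the budget
    runs out), then return the index with the highest sample mean."""
    data = [[draw() for _ in range(n0)] for draw in samplers]
    total = n0 * len(samplers)
    while total < max_samples:
        means = [statistics.fmean(d) for d in data]
        ses = [statistics.stdev(d) / math.sqrt(len(d)) for d in data]
        if eoc_bonf(means, ses) < eoc_target:
            break  # adaptive stopping rule: expected loss from stopping is small
        # Stand-in allocation rule (NOT LL1): sample the alternative whose
        # posterior mean is least resolved.
        i = max(range(len(samplers)), key=lambda j: ses[j])
        data[i].append(samplers[i]())
        total += 1
    means = [statistics.fmean(d) for d in data]
    return max(range(len(samplers)), key=means.__getitem__)
```

Swapping the allocation line for LL1 and comparing stopping criteria is exactly the kind of experiment the abstract reports: the sampling rule is held fixed while the stopping rule changes.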