In this extended abstract, we examine the common practice of using optimization problem test suites to develop and evaluate optimization algorithms, and apply to this practice a number of results from computational learning theory. These results enable optimization algorithm developers to state principled quantitative bounds on the likely performance of their algorithms on unseen problem instances, based on their experimental design and their empirical results on training or test instances. We first recap the relevant results from computational learning theory, then describe how optimization development practice can be recast so that these results apply, and briefly discuss some related implications. An updated version of this article and associated material, including statistical tables relating to generalization bounds, are provided at http://is.gd/evalopt.
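To illustrate the kind of quantitative bound such results enable, the sketch below applies a one-sided Hoeffding test-set bound to empirical benchmark results; this is a generic bound of the sort found in the computational learning theory literature, not necessarily the specific bounds tabulated in the article, and the scores and function name are illustrative assumptions.

```python
import math

def hoeffding_lower_bound(scores, delta=0.05):
    """One-sided Hoeffding bound (illustrative): with probability at
    least 1 - delta over the random draw of the test instances, the
    expected score on unseen instances is at least mean - eps.
    Assumes scores lie in [0, 1] and that test instances are sampled
    i.i.d. from the same distribution as future unseen instances."""
    n = len(scores)
    mean = sum(scores) / n
    # eps solves exp(-2 * n * eps^2) = delta
    eps = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return mean - eps

# Hypothetical experiment: 100 held-out instances, each scored by a
# normalized performance measure in [0, 1].
scores = [0.8] * 100
bound = hoeffding_lower_bound(scores, delta=0.05)
print(round(bound, 4))
```

Note that the guarantee only covers unseen instances drawn from the same distribution as the test suite, which is exactly why the article's recasting of optimization development practice (how instances are sampled, and which instances the developer saw) matters for the validity of the bound.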