Search-based algorithms, such as planners, schedulers, and satisfiability solvers, are notorious for having numerous parameters whose values can drastically affect performance. As a result, users of these algorithms, who may not be search experts, spend significant time tuning parameter values to obtain acceptable performance on their particular problem domains. In this paper, we present a learning-based approach for automatically tuning search-based algorithms to help such users. Our methodology handles diverse parameter types, performs effectively for a broad range of both systematic and non-systematic search-based solvers (well-chosen parameters can let an algorithm solve up to 100% of the problems, while poor parameters can lead to none being solved), incorporates user-specified performance criteria (Φ), and is easy to implement. Moreover, the selected parameter setting will either satisfy Φ on the first try, or the ranked candidates can be used together with Φ to minimize the number of times the parameter settings need to be adjusted until a problem is solved.
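To make the ranked-candidate idea concrete, the sketch below illustrates how candidate parameter settings, ordered by a learned ranking, could be tried in rank order until one meets the user-specified criterion Φ. This is only an assumed illustration of the abstract's description, not the authors' implementation; the names tune_by_ranked_candidates, solve, and phi are hypothetical.

```python
# Illustrative sketch (assumed, not from the paper): try ranked candidate
# parameter settings until one satisfies the user-specified criterion Φ.
from typing import Any, Callable, Dict, Iterable, Optional


def tune_by_ranked_candidates(
    problem: Any,
    ranked_candidates: Iterable[Dict[str, Any]],
    solve: Callable[[Any, Dict[str, Any]], float],
    phi: Callable[[float], bool],
) -> Optional[Dict[str, Any]]:
    """Return the first parameter setting whose measured performance
    satisfies the criterion phi, or None if no candidate qualifies."""
    for params in ranked_candidates:
        performance = solve(problem, params)  # e.g. runtime or solution quality
        if phi(performance):                  # user-specified performance criterion Φ
            return params
    return None                               # no candidate met Φ
```

In this sketch the ranking supplied to ranked_candidates would come from the learned model, so that settings likely to satisfy Φ are tried first, minimizing the number of adjustments before a problem is solved.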