The suitability of an optimisation algorithm selected from an algorithm portfolio depends on the features of the particular instance to be solved. Understanding the relative strengths and weaknesses of the algorithms in the portfolio is crucial for performance prediction and automated algorithm selection, and yields knowledge about the conditions under which each algorithm excels, which can in turn inform better algorithm design. Relying on well-studied benchmark instances, or on randomly generated instances, limits our ability to truly challenge each algorithm in the portfolio and to determine these ideal conditions. Instead, we use an evolutionary algorithm to evolve instances that are uniquely easy or hard for each algorithm, providing a more direct method for studying their relative strengths and weaknesses. The proposed methodology ensures that the meta-data is sufficient for learning the instance features that uniquely characterise the ideal conditions for each algorithm. A case study is presented, based on a comprehensive study of the performance of two heuristics on the Travelling Salesman Problem. The results show that both the search effort and the best-performing algorithm for a given instance can be predicted with high accuracy.
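The instance-evolution step described above can be sketched as a simple elitist evolutionary algorithm whose fitness is the performance gap between two TSP heuristics on a candidate instance. The following is an illustrative sketch only, not the authors' implementation: nearest-neighbour and 2-opt stand in for the two studied heuristics, instances are mutated by perturbing city coordinates, and all function names and parameter values are hypothetical.

```python
import math
import random

def tour_length(cities, tour):
    """Total length of the closed tour over the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(cities):
    """Greedy nearest-neighbour construction heuristic, starting at city 0."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(cities, tour):
    """First-improvement 2-opt local search; stops when no move improves."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cities, cand) < tour_length(cities, tour):
                    tour, improved = cand, True
    return tour

def hardness_gap(cities):
    """Fitness: how much worse plain nearest-neighbour is than NN + 2-opt.
    A large gap means the instance is 'uniquely hard' for the weaker heuristic."""
    nn_tour = nearest_neighbour(cities)
    nn = tour_length(cities, nn_tour)
    refined = tour_length(cities, two_opt(cities, nn_tour))
    return nn - refined  # non-negative, since 2-opt only improves the NN tour

def mutate(cities, sigma=5.0):
    """Perturb one randomly chosen city's coordinates (Gaussian noise)."""
    new = list(cities)
    i = random.randrange(len(new))
    x, y = new[i]
    new[i] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
    return new

def evolve_instances(n_cities=10, pop_size=8, generations=20, seed=0):
    """Elitist (mu + lambda) EA over instances, maximising the hardness gap."""
    random.seed(seed)
    pop = [[(random.uniform(0, 100), random.uniform(0, 100))
            for _ in range(n_cities)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(p) for p in pop]
        pop = sorted(pop + offspring, key=hardness_gap, reverse=True)[:pop_size]
    return pop[0]  # the evolved instance with the largest heuristic gap
```

In the paper's methodology the evolved easy/hard instances then serve as meta-data: instance features are measured on them and a model is trained to predict search effort and the best-performing algorithm. Running the fitness in the other direction (minimising the gap) would evolve instances that are uniquely easy for the weaker heuristic.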