The algorithm selection problem aims at selecting the best algorithm for a given computational problem instance according to characteristics of the instance. In this dissertation, we first present results from a theoretical investigation of the algorithm selection problem. Using Rice's theorem, we show the nonexistence of an automatic algorithm selection program based only on the description of the input instance, and using Kolmogorov-complexity arguments about instance hardness and algorithm performance, we show that algorithm selection for search is likewise incomputable. Motivated by these theoretical results, we propose an inductive, machine learning-based approach that combines experimental algorithmics with machine learning techniques to solve the algorithm selection problem in practice.

Experimentally, we apply the proposed methodology to algorithm selection for sorting and for the Most Probable Explanation (MPE) problem. In sorting, instances with existing order are easier for some algorithms. We study different presortedness measures, design algorithms that generate permutations with a specified amount of existing order uniformly at random, and apply various learning algorithms to induce sorting algorithm selection models from runtime experiments. For the MPE problem, the instance characteristics we study include the size and topological type of the network, network connectedness, the skewness of the distributions in the Conditional Probability Tables (CPTs), and the proportion and distribution of evidence variables. The MPE algorithms considered include an exact algorithm (clique-tree propagation), two stochastic sampling algorithms (MCMC Gibbs sampling and importance forward sampling), two search-based algorithms (multi-restart hill-climbing and tabu search), and one hybrid algorithm combining sampling and search (ant colony optimization).

Another major contribution of this dissertation is the discovery of multifractal properties of the joint probability distributions of Bayesian networks.
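The sorting study described above relies on presortedness measures. As a minimal illustrative sketch (not the dissertation's specific measures), one standard measure is the number of inversions, which is 0 for a sorted sequence and n(n-1)/2 for a reverse-sorted one; counting during merge sort keeps the cost at O(n log n):

```python
def inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j] via merge sort."""
    def sort_count(xs):
        if len(xs) <= 1:
            return xs, 0
        mid = len(xs) // 2
        left, cl = sort_count(xs[:mid])
        right, cr = sort_count(xs[mid:])
        merged, cm, i, j = [], 0, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                # every remaining element of `left` is inverted with right[j]
                merged.append(right[j]); j += 1
                cm += len(left) - i
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, cl + cr + cm
    return sort_count(list(a))[1]
```

A feature like this, normalized by the maximum n(n-1)/2, is the kind of instance characteristic a learned selection model can condition on.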
With sufficient asymmetry in the individual prior and conditional probability distributions, the joint distribution is not only highly skewed but also has clusters of high-probability instantiations at all scales. We present a two-phase hybrid algorithm, combining random sampling and search, that exploits this clustering property to solve the MPE problem. Since the decision version of the MPE problem is NP-complete, the multifractal meta-heuristic may also be applied to other NP-hard combinatorial optimization problems.
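The two-phase idea can be sketched in miniature. The sketch below is an assumption-laden toy, not the dissertation's algorithm: `logp` stands in for a Bayesian-network evaluator over binary variable assignments, phase one uses random sampling to land near a high-probability cluster, and phase two refines the seed by greedy single-variable flips:

```python
import random

def two_phase_mpe(logp, n_vars, n_samples=200, seed=0):
    """Toy two-phase hybrid: random sampling, then greedy local search.

    `logp` maps a tuple of 0/1 assignments to a log-probability score.
    """
    rng = random.Random(seed)
    # Phase 1: random sampling keeps the best assignment found so far.
    best = tuple(rng.randint(0, 1) for _ in range(n_vars))
    best_lp = logp(best)
    for _ in range(n_samples):
        x = tuple(rng.randint(0, 1) for _ in range(n_vars))
        lp = logp(x)
        if lp > best_lp:
            best, best_lp = x, lp
    # Phase 2: hill-climb by flipping one variable at a time.
    improved = True
    while improved:
        improved = False
        for i in range(n_vars):
            y = best[:i] + (1 - best[i],) + best[i + 1:]
            lp = logp(y)
            if lp > best_lp:
                best, best_lp, improved = y, lp, True
    return best, best_lp
```

On a multifractal joint distribution, the sampling phase is likely to hit one of the high-probability clusters, after which local search only has to climb within that cluster rather than traverse the whole assignment space.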