A fundamental question in the field of approximation algorithms is, for a given problem instance, the selection of the best (or a suitable) algorithm with respect to some performance criteria. A practical strategy for tackling this problem is the application of machine learning techniques. However, the literature offers limited support for the case of more than one performance criterion, which is the natural scenario for approximation algorithms. We propose multidimensional Bayesian network (mBN) classifiers as a relatively simple, yet well-principled, approach to this problem. Specifically, we relax the algorithm selection decision problem to the elucidation of the nondominated subset of algorithms, which contains the best one. This formulation can be used in different ways to address the main problem, each of which can be tackled with an mBN classifier. We deal with two of them: predicting the whole nondominated set, and predicting whether a given algorithm is nondominated. We illustrate the feasibility of the approach in real-life scenarios with a case study in the context of Search Based Software Test Data Generation (SBSTDG). A set of five SBSTDG generators is considered, and the aim is to assist a hypothetical test engineer in identifying good generators for fulfilling the branch testing of a given programme.
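As a minimal sketch of the relaxation described above, the following Python fragment computes the nondominated (Pareto) subset of a set of algorithms scored on several performance criteria. The generator names and scores are purely illustrative assumptions, not data from the paper, and the classifiers themselves (the mBN models) are not shown; this only illustrates the target the classifiers are trained to predict.

```python
# Illustrative sketch (hypothetical data): the nondominated subset of
# algorithms under multiple performance criteria, assuming lower is
# better on every criterion.

def dominates(a, b):
    """True if score vector a dominates b (minimisation): a is no worse
    on every criterion and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(scores):
    """Return the names of algorithms whose score vectors are not
    dominated by any other algorithm's score vector."""
    return [name for name, s in scores.items()
            if not any(dominates(t, s)
                       for other, t in scores.items() if other != name)]

# Five hypothetical SBSTDG generators scored on
# (search time, 1 - branch coverage); lower is better on both.
scores = {
    "gen_A": (10.0, 0.10),
    "gen_B": (12.0, 0.05),
    "gen_C": (15.0, 0.05),  # dominated by gen_B (slower, same coverage)
    "gen_D": (8.0, 0.20),
    "gen_E": (20.0, 0.30),  # dominated by gen_A on both criteria
}
print(sorted(nondominated(scores)))  # -> ['gen_A', 'gen_B', 'gen_D']
```

In this relaxed formulation, the test engineer is handed the whole nondominated set (here three of the five generators) rather than a single winner, and the two prediction tasks in the abstract correspond to predicting this set directly or predicting membership in it per algorithm.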