Unique Input Output (UIO) sequences are used in conformance testing of finite-state machines (FSMs). Evolutionary algorithms (EAs) have recently been employed to search for UIOs; however, the problem of tuning EA parameters remains unsolved. In this paper, a number of fitness-landscape features are computed to characterize each UIO instance, a set of EA parameter settings is labelled either 'good' or 'bad' for each instance, and a predictor mapping the features of a UIO instance to 'good' EA parameter settings is then trained. For a given UIO instance, this predictor is used to find good EA parameter settings, and the experimental results show that the rate of correctly predicting 'good' EA parameters exceeded 93%. Although the experimental study was carried out on the UIO problem, the paper addresses a more general issue: a systematic and principled method of tuning parameters for search algorithms. This is the first time that such a framework has been proposed in Search-Based Software Engineering for parameter tuning, using machine learning techniques to learn good parameter values.
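The pipeline described above can be sketched minimally: represent each problem instance by a vector of landscape features, record which parameter setting worked well on it, and train a predictor that maps the features of an unseen instance to a recommended setting. The sketch below uses a simple 1-nearest-neighbour predictor in plain Python; the feature values, parameter strings, and the choice of classifier are all illustrative assumptions, not the paper's actual data or model.

```python
import math

# Hypothetical training data: (landscape features of a UIO instance,
# EA parameter setting labelled 'good' for that instance).
# Both the feature values and the settings are made up for illustration.
training = [
    ((0.10, 0.80), "mutation=0.01,crossover=0.7"),
    ((0.15, 0.75), "mutation=0.01,crossover=0.7"),
    ((0.60, 0.20), "mutation=0.10,crossover=0.3"),
    ((0.65, 0.25), "mutation=0.10,crossover=0.3"),
]

def predict_good_setting(features):
    """1-nearest-neighbour predictor: recommend the 'good' parameter
    setting of the most similar previously characterized instance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, setting = min(training, key=lambda row: dist(row[0], features))
    return setting

# A new instance whose features resemble the first cluster:
print(predict_good_setting((0.12, 0.78)))  # -> mutation=0.01,crossover=0.7
```

In the paper's setting, a stronger classifier trained on many labelled instances would replace the nearest-neighbour lookup, but the interface is the same: features in, recommended parameter setting out.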