Evolutionary Bi-objective Learning with Lowest Complexity in Neural Networks: Empirical Comparisons
ICANNGA '07: Proceedings of the 8th International Conference on Adaptive and Natural Computing Algorithms, Part I
This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs offer advantages over the simple GA in terms of accuracy or speed on this problem. The experiments considered a feedforward neural network trained with standard backpropagation and 15 public-domain and artificial data sets. In most cases, the pruned networks had accuracy equal to or better than that of the original fully connected networks. We found few differences in the accuracy of the networks pruned by the four EAs, but large differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
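As a rough illustration of how one of the DEAs mentioned in the abstract, the compact GA, can drive connection pruning, the sketch below evolves a binary mask over the input-to-hidden weights of a small scikit-learn MLP and scores each mask by validation accuracy. The data set, network size, and cGA parameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative sketch only: compact-GA pruning of the input-to-hidden
# connections of a trained MLP. All settings here are assumptions.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
full_weights = net.coefs_[0].copy()   # input-to-hidden weight matrix
n_bits = full_weights.size            # one bit per prunable connection

def fitness(mask):
    """Validation accuracy of the network with masked (pruned) connections."""
    net.coefs_[0] = full_weights * mask.reshape(full_weights.shape)
    return net.score(X_val, y_val)

# Compact GA: a probability vector replaces an explicit population.
p = np.full(n_bits, 0.5)              # P(bit = 1) for each connection
step = 1.0 / 50                       # update size ~ 1 / virtual population size
for _ in range(300):
    a = (rng.random(n_bits) < p).astype(float)
    b = (rng.random(n_bits) < p).astype(float)
    winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
    differ = winner != loser
    p[differ] += step * (2 * winner[differ] - 1)   # shift toward winner's bits
    p = np.clip(p, 0.0, 1.0)

mask = (p > 0.5).astype(float)        # final pruning mask
print(f"kept {int(mask.sum())}/{n_bits} connections, "
      f"pruned accuracy = {fitness(mask):.3f}")
net.coefs_[0] = full_weights          # restore the unpruned network
print(f"full accuracy   = {net.score(X_val, y_val):.3f}")
```

The same mask-evaluation loop could be wrapped around a simple GA, the extended compact GA, or BOA; only the model that samples new candidate masks would change.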