This paper proposes an approach to neural network training that simultaneously optimizes architectures and weights with a Particle Swarm Optimization (PSO)-based multiobjective algorithm. Most evolutionary computation-based training methods formulate the problem as a single-objective one by taking a weighted sum of the objectives, from which a single neural network model is generated. Our goal is to determine whether multiobjective particle swarm optimization can train neural networks with respect to two objectives: accuracy and complexity. We propose rules for the automatic deletion of unnecessary nodes from the network, based on the idea that a connection is pruned if its weight is smaller than the smallest bias in the entire network. Experiments on benchmark datasets from the UCI Machine Learning Repository show that this approach provides an effective means of training neural networks and is competitive with other evolutionary computation-based methods.
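The pruning rule and the two-objective formulation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `prune_connections` and `objectives` are hypothetical helpers, and comparing weight and bias magnitudes (absolute values) is an assumption about how the rule is applied.

```python
import numpy as np

def prune_connections(weights, biases):
    # Prune a connection when its weight magnitude falls below the
    # smallest bias magnitude anywhere in the network (assumed
    # absolute-value comparison; hypothetical helper).
    threshold = min(np.abs(b).min() for b in biases)
    return [w * (np.abs(w) >= threshold) for w in weights]

def objectives(weights, biases, errors):
    # Two objectives for the multiobjective search: an accuracy proxy
    # (mean error over the training samples, supplied here) and a
    # complexity measure (connections surviving the pruning rule).
    pruned = prune_connections(weights, biases)
    complexity = sum(int(np.count_nonzero(w)) for w in pruned)
    return float(np.mean(errors)), complexity
```

In a multiobjective PSO, each particle encoding a weight/bias configuration would be scored with such a pair of objectives, and the swarm would maintain a Pareto front of (accuracy, complexity) trade-offs rather than a single weighted-sum fitness.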