Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations.
Learning and relearning in Boltzmann machines. In: Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
Neural network learning and expert systems.
The cascade-correlation learning architecture. Advances in Neural Information Processing Systems 2.
Minimisation methods for training feedforward neural networks. Neural Networks.
Structural adaptation and generalization in supervised feed-forward networks. Journal of Artificial Neural Networks.
Evolutionary computation: toward a new philosophy of machine intelligence.
Neural Networks for Pattern Recognition.
Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing).
Learning from Data: Concepts, Theory, and Methods.
Constructive Neural Networks.
Evolving Arbitrarily Connected Feedforward Neural Networks via Genetic Algorithms. SBRN '10: Proceedings of the 2010 Eleventh Brazilian Symposium on Neural Networks.
A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks.
Perceptron-based learning algorithms. IEEE Transactions on Neural Networks.
Backpropagation neural nets with one and two hidden layers. IEEE Transactions on Neural Networks.
Genetic evolution of the topology and weight distribution of neural networks. IEEE Transactions on Neural Networks.
An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks.
Recurrent neural networks and robust time series prediction. IEEE Transactions on Neural Networks.
In this work we present a constructive algorithm capable of producing arbitrarily connected feedforward neural network architectures for classification problems. Both the architecture and the synaptic weights of the network are defined by the learning procedure. The main purpose is to obtain a parsimonious neural network, in the form of a hybrid and dedicated linear/nonlinear classification model, that can reach high levels of generalization performance. Although it is neither a global optimization algorithm nor a population-based metaheuristic, the constructive approach has mechanisms to avoid premature convergence: it interleaves growing and pruning processes and implements a relaxation strategy for the learning error. The synaptic weights of the networks produced by the constructive mechanism are adjusted by a quasi-Newton method, and the decision to grow or prune the current network is based on a mutual information criterion. A set of benchmark experiments on artificial and real datasets indicates that the new proposal performs favorably against alternative approaches in the literature, such as the traditional MLP, mixtures of heterogeneous experts, cascade-correlation networks, and an evolutionary programming system, in terms of both classification accuracy and parsimony of the obtained classifier.
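To make the described training loop concrete, the following is a minimal sketch of a mutual-information-driven constructive procedure. It is not the authors' algorithm: it assumes a single hidden layer grown one unit at a time rather than arbitrary feedforward connectivity, uses SciPy's BFGS routine as the quasi-Newton optimizer and scikit-learn's mutual_info_classif as a stand-in for the mutual information criterion, and it omits pruning and the error-relaxation strategy. Every function name and parameter below (fit_constructive, max_hidden, tol) is hypothetical.

```python
"""Minimal sketch of a constructive training loop for classification.

Assumptions (not from the paper): one hidden layer grown a unit at a time,
tanh hidden units with a softmax output, BFGS as the quasi-Newton optimizer,
and mutual_info_classif as a proxy for the mutual information criterion.
"""
import numpy as np
from scipy.optimize import minimize
from sklearn.feature_selection import mutual_info_classif


def _unpack(theta, n_in, n_hid, n_out):
    """Split a flat parameter vector into the weights and biases of a one-hidden-layer net."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2


def _forward(theta, X, n_hid, n_out):
    """Return hidden activations and softmax class probabilities."""
    W1, b1, W2, b2 = _unpack(theta, X.shape[1], n_hid, n_out)
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    Z -= Z.max(axis=1, keepdims=True)            # numerically stable softmax
    P = np.exp(Z)
    return H, P / P.sum(axis=1, keepdims=True)


def _loss(theta, X, Y, n_hid, n_out):
    """Cross-entropy objective handed to the quasi-Newton optimizer."""
    _, P = _forward(theta, X, n_hid, n_out)
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))


def fit_constructive(X, y, max_hidden=10, tol=0.05, seed=0):
    """Grow hidden units one at a time.

    Growing stops when the training error drops below `tol` or when the mutual
    information between the hidden representation and the labels stops improving
    (a crude stand-in for the paper's grow/prune criterion).
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=int)
    n_out = int(y.max()) + 1
    Y = np.eye(n_out)[y]                          # one-hot targets
    best_mi, theta, best_n = -np.inf, None, 0
    for n_hid in range(1, max_hidden + 1):
        n_par = X.shape[1] * n_hid + n_hid + n_hid * n_out + n_out
        theta0 = rng.normal(scale=0.1, size=n_par)
        # Quasi-Newton weight adjustment (BFGS with numerical gradients).
        res = minimize(_loss, theta0, args=(X, Y, n_hid, n_out),
                       method="BFGS", options={"maxiter": 200})
        H, P = _forward(res.x, X, n_hid, n_out)
        err = np.mean(P.argmax(axis=1) != y)
        mi = mutual_info_classif(H, y, random_state=seed).sum()
        improved = mi > best_mi
        if improved:                              # growing still pays off
            best_mi, theta, best_n = mi, res.x, n_hid
        if err <= tol or not improved:
            break
    return theta, best_n
```

Calling fit_constructive(X, y) on a small dataset would return the fitted parameter vector and the number of hidden units retained; _forward can then be reused to score new samples.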