The generalization ability of artificial neural networks (ANNs) depends greatly on their architectures. Constructive algorithms provide an attractive, automatic way of determining a near-optimal ANN architecture for a given problem. Several such algorithms have been proposed in the literature and have demonstrated their effectiveness. This paper presents a new constructive algorithm (NCA) for automatically determining ANN architectures. Unlike most previous studies on determining ANN architectures, NCA emphasizes both architectural adaptation and functional adaptation in its architecture-determination process. It uses a constructive approach to determine the number of hidden layers in an ANN and the number of neurons in each hidden layer. To achieve functional adaptation, NCA trains the hidden neurons of the ANN on different training sets, which are created by employing a concept similar to that used in boosting algorithms. The purpose of using different training sets is to encourage hidden neurons to learn different parts or aspects of the training data, so that the ANN as a whole learns the training data better. The convergence and computational issues of NCA are studied analytically. The computational complexity of NCA is found to be O(W × Pt × τ), where W is the number of weights in the ANN, Pt is the number of training examples, and τ is the number of training epochs. This complexity is of the same order as that required by the backpropagation algorithm to train a fixed ANN architecture. A set of eight classification and two approximation benchmark problems was used to evaluate the performance of NCA. The experimental results show that NCA produces ANN architectures with fewer hidden neurons and better generalization ability than existing constructive and nonconstructive algorithms.
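The boosting-style creation of different training sets described above can be illustrated with a minimal sketch. The idea, as in boosting, is to down-weight examples the current network already handles, so the next hidden neuron concentrates on what remains unlearned. The function name `reweight_examples` and the shrink factor `beta` are illustrative assumptions, not the exact update rule used by NCA:

```python
import numpy as np

def reweight_examples(weights, correct, beta=0.5):
    """Boosting-style update: shrink the weight of examples the current
    network already classifies correctly, then renormalize so the weights
    form a distribution. (Illustrative sketch; NCA's exact rule may differ.)"""
    w = weights.copy()
    w[correct] *= beta        # de-emphasize examples already learned
    return w / w.sum()        # renormalize to sum to 1

# Usage: 6 training examples, of which the current network gets
# the first 4 right; the next training set emphasizes the last 2.
w0 = np.full(6, 1.0 / 6.0)
correct = np.array([True, True, True, True, False, False])
w1 = reweight_examples(w0, correct)
# w1 gives each misclassified example twice the weight of a correct one
```

With `beta=0.5`, the two misclassified examples end up with weight 0.25 each versus 0.125 for each correct one, which is the intended effect: each newly added hidden neuron sees a training distribution biased toward the parts of the data the network has not yet learned.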