This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units and adds more as needed to improve the network's predictive accuracy. To decide when to stop adding hidden units, it sets aside a subset of the available training samples for cross validation: a new hidden unit is kept only if it improves the network's classification accuracy on both the training samples and the cross-validation samples. Extensive experimental results show that the algorithm obtains networks with predictive accuracy better than that of state-of-the-art decision tree methods.
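The growing-and-stopping procedure described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: it retrains a scikit-learn `MLPClassifier` from scratch at each candidate size rather than growing a trained network in place, and the dataset, split ratio, and size limits are assumptions chosen for the example. The stopping rule, however, mirrors the abstract's criterion: accept a larger network only if it improves accuracy on both the training set and the held-out cross-validation set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def grow_network(X, y, start_units=1, max_units=20, seed=0):
    # Hold out a subset of the training samples for cross validation,
    # as the algorithm in the abstract does (25% here is an assumption).
    X_tr, X_cv, y_tr, y_cv = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    best, best_tr, best_cv = None, -1.0, -1.0
    h = start_units
    while h <= max_units:
        # Simplification: train a fresh single-hidden-layer network of
        # size h instead of adding a unit to the previous network.
        net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000,
                            random_state=seed)
        net.fit(X_tr, y_tr)
        acc_tr = net.score(X_tr, y_tr)
        acc_cv = net.score(X_cv, y_cv)
        # Keep the larger network only if it improves classification
        # accuracy on BOTH the training and cross-validation samples.
        if acc_tr > best_tr and acc_cv > best_cv:
            best, best_tr, best_cv = net, acc_tr, acc_cv
            h += 1
        else:
            break  # no joint improvement: stop adding hidden units
    return best, best_tr, best_cv

# Illustrative synthetic classification problem (an assumption).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
net, acc_tr, acc_cv = grow_network(X, y)
print(net.hidden_layer_sizes, acc_tr, acc_cv)
```

Because the acceptance test is strict on both sets, growth often halts early once training accuracy saturates, which is the mechanism the abstract relies on to keep the hidden layer small.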