This paper describes an algorithm for constructing a single-hidden-layer feedforward neural network. A distinguishing feature of the algorithm is that it uses the quasi-Newton method to minimize the sequence of error functions associated with the growing network. Experimental results indicate that the algorithm is efficient and robust. It was tested on two problems: the n-bit parity problem and the breast cancer diagnosis problem from the University of Wisconsin Hospitals. For the n-bit parity problem, the algorithm constructed neural networks with fewer than n hidden units that solved the problem for n = 4, ..., 7. For the cancer diagnosis problem, the networks constructed by the algorithm had a small number of hidden units and achieved high accuracy rates on both the training and the testing data.
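The constructive scheme the abstract describes can be sketched as a loop that adds hidden units one at a time, retraining the full weight vector with a quasi-Newton (BFGS) minimizer after each addition. The sketch below, shown on the n-bit parity task, is a minimal illustration under assumed choices (tanh hidden units, sigmoid output, MSE loss, a single random restart per network size); the function names and hyperparameters are illustrative, not the authors' exact procedure.

```python
# Hedged sketch of a constructive, quasi-Newton-trained feedforward network.
# Assumptions (not from the paper): tanh hidden units, sigmoid output, MSE
# loss, one random initialization per candidate network size.
import numpy as np
from scipy.optimize import minimize

def parity_data(n):
    """All 2^n binary inputs and their parity labels (the n-bit parity task)."""
    X = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)], float)
    y = X.sum(axis=1) % 2
    return X, y

def unpack(w, n_in, h):
    """Split a flat weight vector into hidden- and output-layer parameters."""
    k = h * (n_in + 1)
    W = w[:k].reshape(h, n_in + 1)   # hidden weights, with a bias column
    v = w[k:]                        # output weights plus bias, length h + 1
    return W, v

def forward(w, X, h):
    """Single-hidden-layer network: tanh hidden units, sigmoid output."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    W, v = unpack(w, X.shape[1], h)
    H = np.tanh(Xb @ W.T)                          # hidden activations
    Hb = np.hstack([H, np.ones((len(X), 1))])      # append bias unit
    return 1.0 / (1.0 + np.exp(-(Hb @ v)))         # sigmoid output

def construct(X, y, max_hidden=6, tol=1e-3, seed=0):
    """Grow the network one hidden unit at a time; after each addition,
    minimize the MSE over all weights with BFGS (a quasi-Newton method)."""
    rng = np.random.default_rng(seed)
    for h in range(1, max_hidden + 1):
        w0 = rng.normal(scale=0.5, size=h * (X.shape[1] + 1) + h + 1)
        loss = lambda w: np.mean((forward(w, X, h) - y) ** 2)
        res = minimize(loss, w0, method="BFGS")    # quasi-Newton training step
        if res.fun < tol:                          # error small enough: stop
            return res.x, h, res.fun
    return res.x, h, res.fun                       # best found at max size

w, h, err = construct(*parity_data(4))
```

A constant predictor of 0.5 would score an MSE of 0.25 on the balanced parity labels, so any trained network returned by the loop should land well below that; the early stopping test (`res.fun < tol`) is what lets the procedure settle on fewer hidden units than the input size when a small network suffices.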