Advanced algorithms for neural networks: a C++ sourcebook
Regularization theory and neural networks architectures
Neural Computation
Pattern recognition using neural networks: theory and algorithms for engineers and scientists
A bundle-Newton method for nonsmooth unconstrained minimization
Mathematical Programming: Series A and B
Neural Networks: A Comprehensive Foundation
Neural Networks for Pattern Recognition
SIAM Journal on Optimization
Semi-supervised Learning of Tree-Structured RBF Networks Using Co-training
ICANN '08 Proceedings of the 18th international conference on Artificial Neural Networks, Part I
Expert Systems with Applications: An International Journal
Solving a system of nonlinear integral equations by an RBF network
Computers & Mathematics with Applications
A quasisecant method for minimizing nonsmooth functions
Optimization Methods & Software (special issue dedicated to Professor Vladimir F. Demyanov on the occasion of his 70th birthday)
In this paper, we propose a hybrid learning algorithm for single hidden layer feedforward neural networks (SLFNs) for data classification. The proposed algorithm is a two-phase learning scheme based on the quasisecant and simulated annealing methods. First, the weights between the hidden-layer and output-layer nodes (the output-layer weights) are adjusted by the quasisecant algorithm. Then simulated annealing is applied for global attribute weighting. The weights between the input-layer and hidden-layer nodes are fixed in advance and are not included in the learning process. This two-phase training scheme is a novel idea and differs from existing SLFN learning approaches. Numerical results on several benchmark data sets are reported, and these results are promising.
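The two-phase scheme described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' implementation: the input-to-hidden weights are fixed at random values as the abstract states, a least-squares solve stands in for the quasisecant step that adjusts the output-layer weights (the actual paper minimizes a possibly nonsmooth error with the quasisecant method), and a basic simulated annealing loop searches the global attribute (feature) weights. All function and variable names here are illustrative.

```python
import numpy as np

def train_slfn_two_phase(X, y, n_hidden=20, sa_iters=200, seed=0):
    """Sketch of a two-phase SLFN learner.

    Phase 1: fit output-layer weights with fixed random input->hidden
    weights (least squares stands in for the quasisecant step).
    Phase 2: simulated annealing over global attribute weights.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input->hidden weights are fixed in advance and never trained.
    W_in = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)

    def fit_output(attr_w):
        # Scale each input attribute by its weight, then pass through
        # the fixed hidden layer and solve for output-layer weights.
        H = np.tanh((X * attr_w) @ W_in + b)
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        err = float(np.mean((H @ beta - y) ** 2))
        return beta, err

    # Phase 1: output-layer weights with uniform attribute weights.
    attr_w = np.ones(n_features)
    beta, err = fit_output(attr_w)

    # Phase 2: simulated annealing over the attribute weights; each
    # candidate is evaluated with its own refit of the output layer.
    temperature = 1.0
    for _ in range(sa_iters):
        cand = attr_w + rng.normal(scale=0.1, size=n_features)
        beta_c, err_c = fit_output(cand)
        if err_c < err or rng.random() < np.exp((err - err_c) / temperature):
            attr_w, beta, err = cand, beta_c, err_c
        temperature *= 0.98  # geometric cooling schedule
    return W_in, b, attr_w, beta, err
```

Refitting the output layer inside the annealing loop keeps the two phases coupled: each candidate attribute weighting is judged by the best output layer it admits, which mirrors the abstract's separation of output-weight learning from global attribute weighting.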