The original localized generalization error model (LGEM) derives an upper bound on the error between a target function and a radial basis function neural network (RBFNN) within a neighborhood of the training samples. Its central result is that the generalization error is less than or equal to the sum of three terms: the training error, a stochastic sensitivity measure (SSM), and a constant. This paper extends the original LGEM to single-hidden-layer feed-forward neural networks (SLFNs) trained with the extreme learning machine (ELM), a non-iterative training algorithm. The extended LGEM provides useful guidelines for improving the generalization ability of ELM-trained SLFNs, and an architecture-selection algorithm for SLFNs is proposed on its basis. Experimental results on a number of benchmark data sets show that an approximately optimal architecture, in terms of the number of hidden neurons of an SLFN, can be found using our method. Furthermore, results on eleven UCI data sets show that the proposed method is both effective and efficient.
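Restated schematically (the symbols below are illustrative, not necessarily the paper's own notation, and the exact way the LGEM literature combines these terms may differ), the bound described in the abstract reads:

```latex
R_{\mathrm{gen}}(Q) \;\le\;
\underbrace{R_{\mathrm{emp}}}_{\text{training error}}
\;+\;
\underbrace{E_{\mathrm{SSM}}(Q)}_{\text{stochastic sensitivity}}
\;+\;
\underbrace{A}_{\text{constant}},
```

where $Q$ parameterizes the size of the perturbation neighborhood around each training sample, $R_{\mathrm{emp}}$ is the empirical (training) error, and $E_{\mathrm{SSM}}(Q)$ measures the expected change in the network output under small input perturbations within that neighborhood.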
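A minimal sketch of how such an LGEM-style architecture selection could look, assuming a basic ELM (random hidden weights, least-squares output weights) and a Monte-Carlo estimate of the stochastic sensitivity. All function names, the perturbation scheme, and the candidate grid here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """Basic ELM: random hidden layer, output weights by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights via pseudo-inverse
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def sensitivity(X, model, q, n_draws=20, rng=None):
    """Monte-Carlo SSM estimate: mean squared output change under
    uniform input perturbations within a Q-neighborhood."""
    rng = rng or np.random.default_rng(0)
    y0 = predict(X, *model)
    diffs = [np.mean((predict(X + rng.uniform(-q, q, size=X.shape), *model) - y0) ** 2)
             for _ in range(n_draws)]
    return np.mean(diffs)

def select_architecture(X, y, q=0.1, max_hidden=100, seed=0):
    """Pick the hidden-layer size minimizing training error + SSM,
    a proxy for the LGEM bound (the constant term does not move the argmin)."""
    rng = np.random.default_rng(seed)
    best_bound, best_n = np.inf, None
    for n_hidden in range(5, max_hidden + 1, 5):
        model = train_elm(X, y, n_hidden, rng)
        r_emp = np.mean((predict(X, *model) - y) ** 2)  # training MSE
        bound = r_emp + sensitivity(X, model, q, rng=rng)
        if bound < best_bound:
            best_bound, best_n = bound, n_hidden
    return best_n
```

On a standardized data set (zero-mean, unit-variance features), `select_architecture(X, y)` would sweep candidate hidden-layer sizes and return the one with the smallest estimated bound; the paper's actual algorithm and its analytic SSM computation may well differ from this sampling-based sketch.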