Training with noise is equivalent to Tikhonov regularization
Neural Computation
The nature of statistical learning theory
Letters: Convex incremental extreme learning machine
Neurocomputing
OP-ELM: Theory, Experiments and a Toolbox
ICANN '08 Proceedings of the 18th international conference on Artificial Neural Networks, Part I
Error minimized extreme learning machine with growth of hidden nodes and incremental learning
IEEE Transactions on Neural Networks
Batch intrinsic plasticity for extreme learning machines
ICANN'11 Proceedings of the 21st international conference on Artificial Neural Networks - Volume Part I
A gradient rule for the plasticity of a neuron’s intrinsic excitability
ICANN'05 Proceedings of the 15th international conference on Artificial Neural Networks: Biological Inspirations - Volume Part I
Universal approximation using incremental constructive feedforward networks with random hidden nodes
IEEE Transactions on Neural Networks
Comments on “The Extreme Learning Machine”
IEEE Transactions on Neural Networks
Reply to “Comments on “The Extreme Learning Machine””
IEEE Transactions on Neural Networks
Extreme learning machines are randomly initialized single-hidden-layer feed-forward neural networks in which training is restricted to the output weights in order to achieve fast learning with good performance. This contribution shows how batch intrinsic plasticity, a novel and efficient scheme for input-specific tuning of non-linear transfer functions, can be combined with ridge regression to optimize extreme learning machines without searching for a suitable hidden-layer size. We show that this scheme achieves excellent performance on a number of standard regression benchmarks and on regression applications from robotics.
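The basic training scheme the abstract describes can be illustrated with a minimal sketch: the input weights of a single-hidden-layer network are drawn randomly and left fixed, and only the output weights are fitted, here via ridge regression in closed form. This is a generic illustration under assumed defaults (tanh transfer functions, Gaussian initialization, a hypothetical `ridge` parameter); it does not include the paper's batch intrinsic plasticity tuning of the transfer functions.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Fit an extreme learning machine: random hidden layer, ridge-regressed output."""
    rng = np.random.default_rng(seed)
    # Input weights and biases are randomly initialized and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    # Training is restricted to the output weights: closed-form ridge regression.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression task: approximate sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0])
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

The ridge term plays the regularization role mentioned in the abstract; without it, the hidden-layer Gram matrix `H.T @ H` can be ill-conditioned for larger hidden layers.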