In this paper, a novel 1-norm extreme learning machine (ELM) for regression and multiclass classification is formulated as a linear programming problem whose solution is obtained by solving its dual exterior penalty problem, an unconstrained minimization problem, with a fast Newton method. The algorithm converges from any starting point and can be easily implemented in MATLAB. The main advantage of the proposed approach is that it yields a sparse model representation: many components of the optimal solution vector become zero, so the decision function can be determined with far fewer hidden nodes than standard ELM requires. Numerical experiments were performed on a number of real-world benchmark datasets, and the results are compared with those of ELM using additive and radial basis function (RBF) hidden nodes, optimally pruned ELM (OP-ELM), and support vector machines (SVMs). The similar or better generalization performance of the proposed method on test data relative to ELM, OP-ELM, and SVM clearly demonstrates its applicability and usefulness.
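To make the formulation concrete, the sketch below builds a random ELM hidden layer and fits sparse output weights by posing the 1-norm problem as a linear program. This is an illustration only: it solves the primal LP directly with `scipy.optimize.linprog` rather than the paper's Newton method on the dual exterior penalty problem, and the node count `L`, penalty `C`, and sigmoid activation are all assumed choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed for illustration)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(80)

# ELM hidden layer: random additive (sigmoid) nodes, weights never trained
L = 40                                    # number of hidden nodes (assumed)
W = rng.standard_normal((1, L))
b = rng.standard_normal(L)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer output matrix, shape (80, L)

# 1-norm ELM as an LP:  min  sum(s) + C * sum(e)
# subject to  |beta_j| <= s_j   (sparsity-inducing 1-norm on output weights)
#             |(H @ beta - y)_i| <= e_i   (absolute training errors)
C = 10.0                                  # error penalty (assumed)
m, n = H.shape
c = np.concatenate([np.zeros(n), np.ones(n), C * np.ones(m)])
A_ub = np.block([
    [ np.eye(n), -np.eye(n), np.zeros((n, m))],
    [-np.eye(n), -np.eye(n), np.zeros((n, m))],
    [ H,          np.zeros((m, n)), -np.eye(m)],
    [-H,          np.zeros((m, n)), -np.eye(m)],
])
b_ub = np.concatenate([np.zeros(2 * n), y, -y])
bounds = [(None, None)] * n + [(0, None)] * (n + m)   # beta free; s, e >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

beta = res.x[:n]                          # sparse output weights
active_nodes = int(np.sum(np.abs(beta) > 1e-6))
```

Because the 1-norm penalizes every nonzero output weight, many entries of `beta` are driven to zero at the optimum, so only the `active_nodes` hidden nodes with nonzero weights are needed to evaluate the decision function, which is the sparsity advantage over standard least-squares ELM described above.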