Two problems hinder the further development of the extreme learning machine (ELM). First, ill-conditioning of the hidden-layer output matrix reduces the stability of ELM. Second, the computational cost of the singular value decomposition (SVD) used to form the Moore-Penrose generalized inverse limits its learning speed. To address these problems, this paper proposes the partial Lanczos ELM (PL-ELM), which computes the output weights by combining partial Lanczos bidiagonalization with an SVD of the resulting small projected matrix. Experimental results indicate that, compared with the standard ELM, PL-ELM not only improves stability and generalization performance but also increases learning speed.
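The core idea can be sketched as follows: instead of forming the full SVD of the hidden-layer output matrix H, run k steps of Golub-Kahan (Lanczos) bidiagonalization started from the target vector t, then solve the small (k+1)-by-k projected least-squares problem with an SVD, truncating tiny singular values for stability. This is a minimal, generic sketch of that idea, not the paper's exact PL-ELM algorithm; the function name, the truncation tolerance, and the fixed iteration count k are illustrative assumptions.

```python
import numpy as np

def pl_elm_weights(H, t, k=20):
    """Approximate w = argmin ||H w - t||_2 via k steps of Golub-Kahan
    bidiagonalization plus an SVD of the small bidiagonal matrix.
    (Hypothetical sketch; the paper's PL-ELM details may differ.)"""
    m, n = H.shape
    U = np.zeros((m, k + 1))       # left Lanczos vectors
    V = np.zeros((n, k))           # right Lanczos vectors
    alphas = np.zeros(k)
    betas = np.zeros(k + 1)

    # Start the recurrence from the target vector t.
    betas[0] = np.linalg.norm(t)
    U[:, 0] = t / betas[0]
    v = H.T @ U[:, 0]
    alphas[0] = np.linalg.norm(v)
    V[:, 0] = v / alphas[0]

    for i in range(k - 1):
        u = H @ V[:, i] - alphas[i] * U[:, i]
        betas[i + 1] = np.linalg.norm(u)
        U[:, i + 1] = u / betas[i + 1]
        v = H.T @ U[:, i + 1] - betas[i + 1] * V[:, i]
        alphas[i + 1] = np.linalg.norm(v)
        V[:, i + 1] = v / alphas[i + 1]
    betas[k] = np.linalg.norm(H @ V[:, k - 1] - alphas[k - 1] * U[:, k - 1])

    # Assemble the (k+1) x k lower-bidiagonal projected matrix B.
    B = np.zeros((k + 1, k))
    for i in range(k):
        B[i, i] = alphas[i]
        B[i + 1, i] = betas[i + 1]

    # Solve min ||B y - beta_1 e_1|| via SVD, truncating tiny singular
    # values (the regularizing step that tames ill-conditioning).
    Ub, s, Vtb = np.linalg.svd(B, full_matrices=False)
    tol = s[0] * 1e-8
    coeff = np.where(s > tol, betas[0] * Ub[0, :] / np.maximum(s, tol), 0.0)
    y = Vtb.T @ coeff
    return V @ y                   # output weights in the original space
```

For a well-conditioned H this reproduces the least-squares output weights; for an ill-conditioned H the truncated SVD of the small matrix B supplies the regularization, while the O(k) matrix-vector products keep the cost far below a full SVD of H.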