A reservoir method is applied to feed-forward learning machines for nonlinear regression estimation. Drawing on existing experience with the extreme learning machine (ELM), the new method inherits the basic idea of support vector echo-state machines but eliminates the internal feedback matrix so that it suits feed-forward use. Based on an analysis of the nonlinearity in the reservoir and the regularization of the readout weights, the input-scaling parameter and the penalty-regularization parameter are taken as the hyper-parameters that characterize a static reservoir (ELM), and a suitable reservoir is then identified on the c-C plane using a generalization-error criterion. For outlier suppression, regularized robust regression is applied in the reservoir feature space; this yields an efficient algorithm for large-scale problems that can be solved by Cholesky decomposition. The proposed method is compared with the classical kernel method and the ELM method on several benchmark nonlinear regression datasets, and the results indicate that it is comparable with the existing methods.
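The pipeline the abstract describes, a static random reservoir with a ridge-regularized readout solved by Cholesky decomposition, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `c` (input scaling), `C` (penalty regularization), and `n_hidden` are assumptions chosen to mirror the two hyper-parameters discussed above, and the robust-regression step for outliers is omitted.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, c=1.0, C=1.0, seed=0):
    """Fit an ELM-style static reservoir with a regularized readout.

    c scales the random input weights (controls reservoir nonlinearity);
    C is the penalty-regularization hyper-parameter on the readout.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = c * rng.standard_normal((n_features, n_hidden))  # scaled random input weights
    b = c * rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                               # static reservoir feature map
    # Regularized least-squares readout: (H^T H + I/C) beta = H^T y,
    # solved via a Cholesky factorization of the SPD normal matrix.
    A = H.T @ H + np.eye(n_hidden) / C
    L = np.linalg.cholesky(A)
    beta = np.linalg.solve(L.T, np.linalg.solve(L, H.T @ y))
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map inputs through the fixed reservoir and apply the learned readout."""
    return np.tanh(X @ W + b) @ beta
```

Because the normal matrix is only `n_hidden` by `n_hidden`, the Cholesky solve stays cheap even when the number of training samples is large, which is the efficiency property the abstract claims for large-scale problems.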