Kernel-based methods suffer from excessive time and memory requirements when applied to large data sets, since the involved optimization problems typically scale polynomially in the number of data samples. As a remedy, existing least squares methods either reduce only the number of parameters (for fast training) or operate only on a reduced set of samples (for fast evaluation). Starting from the Nyström-based feature approximation, via the fixed-size LS-SVM model, we propose a general regression framework based on restricting the search space to a subspace spanned by a particular choice of basis vectors in feature space. In this general model, both reduction aspects are unified and become explicit model choices. This allows the framework to accommodate kernel Partial Least Squares and kernel Canonical Correlation Analysis for regression with a sparse representation, making them applicable to large data sets with little loss in accuracy.
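To make the reduction mechanism concrete, the following is a minimal Python sketch of a Nyström-based approximate feature map followed by primal ridge regression on the resulting m-dimensional features, in the spirit of the fixed-size LS-SVM approach the abstract builds on. The helper names (rbf_kernel, nystrom_features), the random landmark selection, and the plain ridge objective are illustrative assumptions, not the paper's exact estimator; the point is that both training and evaluation cost depend on the small subset size m rather than on the full sample count n.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and the rows of B.
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def nystrom_features(X, Z, gamma=1.0, jitter=1e-10):
    # Nystrom approximate feature map: phi(x) ~= K(x, Z) K(Z, Z)^{-1/2},
    # where Z holds m landmark points, so features span an m-dim subspace.
    Kzz = rbf_kernel(Z, Z, gamma) + jitter * np.eye(len(Z))
    w, V = np.linalg.eigh(Kzz)                  # Kzz is symmetric PSD
    inv_sqrt = V @ np.diag(np.clip(w, jitter, None) ** -0.5) @ V.T
    return rbf_kernel(X, Z, gamma) @ inv_sqrt   # shape (n, m)

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                  # n = 5000 samples
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=5000)

m = 100                                           # subset size m << n
Z = X[rng.choice(len(X), size=m, replace=False)]  # random landmark subset
Phi = nystrom_features(X, Z)                      # explicit approximate features

lam = 1e-3                                      # ridge regularization parameter
w_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
y_fit = Phi @ w_hat          # evaluating a new point needs only m kernel calls
```

Swapping the ridge solve for a kernel Partial Least Squares or kernel Canonical Correlation fit computed on the same explicit features Phi is, per the abstract, what yields sparse variants of those methods within the unified framework.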