Leave-one-out cross-validation has been shown to give an almost unbiased estimator of the generalisation properties of statistical models, and therefore provides a sensible criterion for model selection and comparison. In this paper we show that exact leave-one-out cross-validation of sparse Least-Squares Support Vector Machines (LS-SVMs) can be implemented with a computational complexity of only O(ln²) floating-point operations, rather than the O(l²n²) operations of a naïve implementation, where l is the number of training patterns and n is the number of basis vectors. As a result, leave-one-out cross-validation becomes a practical proposition for model selection in large-scale applications. For clarity the exposition concentrates on sparse least-squares support vector machines in the context of non-linear regression, but the method is equally applicable in a pattern recognition setting.
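The efficiency gain rests on a closed-form identity for leave-one-out residuals in kernel models fitted under a quadratic loss: no model is ever actually refitted. As a minimal sketch, the snippet below uses the closely related (non-sparse) kernel ridge regression, omitting the LS-SVM bias term and the basis-vector structure of the paper. With C = K + λI and α = C⁻¹y, the exact i-th leave-one-out residual is αᵢ / [C⁻¹]ᵢᵢ, so all l residuals follow from a single fit. The function names and the RBF kernel choice are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Z."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def krr_loo_residuals(X, y, lam=1e-2, gamma=1.0):
    """Exact leave-one-out residuals for kernel ridge regression.

    Uses the identity r_i = alpha_i / [C^{-1}]_{ii}, where
    C = K + lam * I and alpha = C^{-1} y, so the l residuals
    come from a single matrix factorisation instead of l refits.
    """
    K = rbf_kernel(X, X, gamma)
    C = K + lam * np.eye(len(y))
    C_inv = np.linalg.inv(C)
    alpha = C_inv @ y
    return alpha / np.diag(C_inv)
```

The mean of the squared residuals returned here is the leave-one-out mean-squared error, which can be minimised directly over (lam, gamma) for model selection, in the spirit of the criterion discussed in the abstract.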