Mika, Rätsch, Weston, Schölkopf and Müller [Mika, S., Rätsch, G., Weston, J., Schölkopf, B., & Müller, K.-R. (1999). Fisher discriminant analysis with kernels. In Neural networks for signal processing: Vol. IX (pp. 41-48). New York: IEEE Press] introduce a non-linear formulation of Fisher's linear discriminant, based on the now-familiar "kernel trick", demonstrating state-of-the-art performance on a wide range of real-world benchmark datasets. In this paper, we extend an existing analytical expression for the leave-one-out cross-validation error [Cawley, G. C., & Talbot, N. L. C. (2003b). Efficient leave-one-out cross-validation of kernel Fisher discriminant classifiers. Pattern Recognition, 36(11), 2585-2592] such that the leave-one-out error can be re-estimated following a change in the value of the regularisation parameter at a computational cost substantially lower than that of the basic training algorithm. This allows the regularisation parameter to be tuned at an essentially negligible computational cost, and is achieved by performing the discriminant analysis in canonical form. The proposed method is therefore a useful component of a model selection strategy for this class of kernel machine that alternates between updates of the kernel and regularisation parameters. Results obtained on real-world and synthetic benchmark datasets indicate that the proposed method is competitive, in terms of generalisation, with model selection based on k-fold cross-validation, whilst being considerably faster.
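The core idea — pay for one expensive decomposition up front, then re-estimate the leave-one-out error cheaply for each candidate regularisation value — can be illustrated with a minimal sketch. This is not the paper's exact canonical-form derivation for kernel Fisher discriminant analysis; it uses the closely related least-squares (kernel ridge) formulation, where the standard closed-form leave-one-out residual r_i = α_i / [(K + λI)⁻¹]_{ii} applies, and where a single eigendecomposition of the Gram matrix K lets both α and the diagonal of the inverse be recomputed for every λ without re-solving the full system. All names below are illustrative:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Illustrative RBF Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def loo_errors_over_lambda(K, y, lambdas):
    """Leave-one-out mean squared error of kernel ridge regression,
    re-estimated for each regularisation value after ONE eigendecomposition.

    The eigendecomposition K = V diag(e) V^T is the expensive step;
    afterwards, for each lambda:
        (K + lambda*I)^{-1} = V diag(1/(e + lambda)) V^T
    so the dual coefficients alpha and the diagonal of the inverse
    (both needed for the closed-form LOO residuals) cost only
    matrix-vector work per candidate lambda.
    """
    evals, V = np.linalg.eigh(K)       # one-off expensive step
    Vty = V.T @ y                      # reused for every lambda
    loo = []
    for lam in lambdas:
        inv_e = 1.0 / (evals + lam)            # spectrum of (K + lam*I)^{-1}
        alpha = V @ (inv_e * Vty)              # dual coefficients
        diag = np.einsum('ij,j,ij->i', V, inv_e, V)  # diag of the inverse
        loo.append(np.mean((alpha / diag) ** 2))     # closed-form LOO MSE
    return np.array(loo)
```

In a model selection loop of the kind the abstract describes, the eigendecomposition would be redone only when the kernel parameters change, while the inner loop over `lambdas` stays cheap — which is what makes alternating between kernel and regularisation updates practical.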