Kernel Fisher discriminant analysis (KFD) is a successful approach to classification. It is well known that the key challenge in KFD lies in the selection of free parameters, such as kernel parameters and regularization parameters. Here we focus on the feature-scaling kernel, in which each feature is individually associated with a scaling factor. A novel algorithm, named FS-KFD, is developed to tune the scaling factors and regularization parameters for the feature-scaling kernel. The proposed algorithm optimizes a smoothed leave-one-out error via gradient descent and has been demonstrated to be computationally feasible. FS-KFD is motivated by two fundamental facts: the leave-one-out error of KFD can be expressed in closed form, and the step function can be approximated by a sigmoid function. Empirical comparisons on artificial and benchmark data sets suggest that FS-KFD improves on KFD in terms of classification accuracy.
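The two facts above can be combined into a short sketch. This is an illustrative reconstruction, not the paper's FS-KFD implementation: it treats regularized KFD as kernel ridge regression on ±1 labels (a standard equivalence), obtains closed-form leave-one-out residuals from the hat-matrix identity, smooths the 0/1 misclassification step with a sigmoid, and tunes the per-feature scaling factors by numerical rather than analytic gradient descent. The function names, the learning rate, and the sigmoid steepness `gamma` are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X, scales):
    # Feature-scaling RBF kernel: each feature j is weighted by its own scaling factor.
    Z = X * scales
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq)

def smooth_loo_error(X, y, scales, lam, gamma=10.0):
    # Regularized KFD solved as kernel ridge regression on +/-1 labels.
    # Closed-form LOO residual for point i: (y_i - f_i) / (1 - H_ii),
    # where H = K (K + lam I)^{-1} is the hat matrix.
    K = rbf_kernel(X, scales)
    n = len(y)
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    f = H @ y
    loo_pred = y - (y - f) / (1.0 - np.diag(H))     # leave-one-out prediction
    margins = y * loo_pred                           # positive margin = correct
    # Sigmoid approximation of the step function -> differentiable LOO error.
    return np.mean(1.0 / (1.0 + np.exp(gamma * margins)))

def tune_scales(X, y, lam=1e-2, steps=20, lr=0.5, eps=1e-4):
    # Gradient descent on the smoothed LOO error; central differences stand in
    # for the analytic gradient used in the paper.
    scales = np.ones(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(scales)
        for j in range(len(scales)):
            d = np.zeros_like(scales)
            d[j] = eps
            grad[j] = (smooth_loo_error(X, y, scales + d, lam)
                       - smooth_loo_error(X, y, scales - d, lam)) / (2 * eps)
        scales -= lr * grad
    return scales
```

On data where only some features are informative, the descent tends to grow the scaling factors of useful features and shrink those of noise features, which is the intended effect of the feature-scaling kernel.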