The performance of a kernel method depends largely on the choice of kernel function. In this study, we present a data-dependent method for scaling the kernel function to improve the classification performance of kernel methods. Instead of locating the support vectors in feature space, we first identify the region around the separating boundary in input space and then scale the kernel function accordingly. Notably, the proposed method does not require a preliminary training step in which a specific classification algorithm finds the boundary, so it can be applied to a variety of classification methods. Experimental results on both artificial and real-world data demonstrate the robustness and validity of the proposed method.
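A minimal sketch of this idea follows, assuming an RBF base kernel, a k-nearest-neighbor rule for flagging label-mixed points as the boundary region, and a Gaussian conformal scaling factor. The abstract does not specify the exact boundary-detection rule or scaling form, so these choices (and all function names below) are illustrative, not the authors' method.

```python
# Sketch: data-dependent conformal kernel scaling, K~(x,x') = c(x)c(x')K(x,x'),
# where c(x) is enlarged near an estimated boundary region found in input space.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def boundary_region(X, y, k=10):
    """Return training points whose k-NN neighborhood contains both classes
    (a hypothetical stand-in for the paper's boundary-detection step)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                     # idx[:, 0] is the point itself
    mixed = np.array([len(np.unique(y[i[1:]])) > 1 for i in idx])
    return X[mixed]

def conformal_factor(X, B, sigma=0.5):
    """c(x) > 1 near the estimated boundary region B, ~1 far from it."""
    d = NearestNeighbors(n_neighbors=1).fit(B).kneighbors(X)[0].ravel()
    return 1.0 + np.exp(-d ** 2 / (2 * sigma ** 2))

def scaled_gram(XA, XB, cA, cB, gamma=1.0):
    """Gram matrix of the scaled kernel with an RBF base kernel."""
    sq = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return np.outer(cA, cB) * np.exp(-gamma * sq)

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
B = boundary_region(X, y)
c_tr = conformal_factor(X, B)
clf = SVC(kernel="precomputed").fit(scaled_gram(X, X, c_tr, c_tr), y)

X_te, y_te = make_moons(n_samples=200, noise=0.25, random_state=1)
c_te = conformal_factor(X_te, B)
print("test accuracy:", clf.score(scaled_gram(X_te, X, c_te, c_tr), y_te))
```

Because the scaled Gram matrix equals D K D with D = diag(c) and c(x) > 0, it remains positive semidefinite whenever the base kernel K is, so the scaled kernel can be passed as a precomputed kernel to any kernel classifier, not only the SVM used here.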