Determining the kernel and error penalty parameters for support vector machines (SVMs) is highly problem-dependent in practice. A popular method for deciding the kernel parameters is grid search: during training, classifiers are trained with many different kernel parameter settings, yet only one of those classifiers is kept for testing. This makes the training process time-consuming. In this paper we propose using the inter-cluster distances in the feature space to choose the kernel parameters. Computing such distances costs far less time than training the corresponding SVM classifiers, so suitable kernel parameters can be chosen much faster. Experimental results show that the inter-cluster distance selects kernel parameters for which the testing accuracy of the trained SVMs is competitive with that obtained by grid search, while the training time is significantly shortened.
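To illustrate the idea, the squared distance between two class means in the feature space induced by a Gaussian kernel can be evaluated purely through kernel sums, without ever training an SVM. The sketch below is a minimal illustration of that computation, not the paper's exact selection criterion: the candidate grid, the helper names, and the use of "maximize the inter-cluster distance" as the selection rule are all assumptions made here for demonstration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2).
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def inter_cluster_distance(X1, X2, gamma):
    # Squared distance between the two class means in the RBF feature space,
    # expanded via the kernel trick:
    # ||m1 - m2||^2 = mean(K11) + mean(K22) - 2 * mean(K12).
    n1, n2 = len(X1), len(X2)
    return (rbf_kernel(X1, X1, gamma).sum() / n1**2
            + rbf_kernel(X2, X2, gamma).sum() / n2**2
            - 2.0 * rbf_kernel(X1, X2, gamma).sum() / (n1 * n2))

def pick_gamma(X1, X2, candidates):
    # Hypothetical selection rule: keep the gamma that best separates the
    # class means; each evaluation is O(n^2) kernel entries, no SVM training.
    return max(candidates, key=lambda g: inter_cluster_distance(X1, X2, g))
```

Each candidate is scored with a few kernel-matrix sums, which is why this screening is much cheaper than training one SVM per grid point.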