Most current kernel learning approaches achieve good results on small datasets but fail to scale to large ones. It is therefore necessary to develop faster kernel optimization algorithms that perform well on larger datasets, especially for "Big Data" applications. This paper presents a novel fast method for optimizing the Gaussian kernel function in two-class pattern classification tasks, where it is desirable for a kernel machine to use an optimized kernel that adapts well to the input data and the learning task. We propose to optimize the Gaussian kernel function using the formulated kernel target alignment criterion. By applying the Euler-Maclaurin formula and exploiting the local and global extremal properties of the approximate kernel separability criterion, the approximate criterion function can be proved to have a unique global minimum. Thus, when the approximate criterion function approximates the criterion function sufficiently well, the proposed optimization can be solved directly with a Newton-based algorithm, without repeating the search from different starting points to locate the best local minimum. The proposed method is evaluated on thirteen data sets with three Gaussian-kernel-based learning algorithms. The experimental results show that the criterion function has a unique global minimum on all thirteen data sets, and that the proposed method achieves both the best time efficiency and the best overall classification performance.
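The core quantity here is the kernel target alignment criterion, which measures how well a kernel matrix K matches the ideal target matrix yy^T built from the class labels. The Python sketch below illustrates this objective for a single Gaussian width sigma on a two-class problem; it is a minimal illustration only. The paper's Euler-Maclaurin approximation and Newton-based solver are not reproduced: a bounded scalar minimizer stands in for them, and the function names, synthetic data, and sigma bounds are assumptions of this sketch, not taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import minimize_scalar

def gaussian_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = cdist(X, X, "sqeuclidean")
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def negative_alignment(sigma, X, y):
    """Negative kernel target alignment: -<K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    K = gaussian_kernel(X, sigma)
    yy = np.outer(y, y)                        # ideal target matrix for labels in {-1, +1}
    num = np.sum(K * yy)                       # Frobenius inner product <K, yy^T>_F
    den = np.linalg.norm(K, "fro") * len(y)    # ||yy^T||_F = n for +/-1 labels
    return -num / den                          # negate: minimizing maximizes alignment

# Illustrative usage on synthetic two-class data (not from the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(2.0, 1.0, (50, 5))])
y = np.concatenate([-np.ones(50), np.ones(50)])

# A bounded scalar minimizer stands in for the paper's Newton-based solver.
res = minimize_scalar(negative_alignment, bounds=(1e-2, 1e2),
                      args=(X, y), method="bounded")
print(f"optimized sigma = {res.x:.4f}, alignment = {-res.fun:.4f}")
```

Since the paper proves that the (approximate) criterion has a unique global minimum, a one-dimensional descent in sigma from any starting point suffices, which is what lets a single Newton-based run replace a multi-start local search.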