Although kernel minimum squared error (KMSE) is computationally simple to train, requiring only the solution of a linear system, its efficiency in the testing phase degrades severely as the number of training samples grows. The underlying reason is that the naive KMSE solution is represented by all of the training samples in the feature space. This paper therefore proposes a method for selecting significant nodes for KMSE. In each round, the algorithm prunes the training sample that contributes least to the objective function, and is accordingly named PLOC-KMSE. To accelerate training, a batch of so-called nonsignificant nodes is pruned at a time rather than one by one; this speedup variant is named MPLOC-KMSE. To show the efficacy and feasibility of PLOC-KMSE and MPLOC-KMSE, experiments on benchmark data sets and real-world instances are reported. The results demonstrate that PLOC-KMSE and MPLOC-KMSE require the fewest significant nodes among the compared algorithms; their computational efficiency in the testing phase is therefore the best, making them suitable for settings with strict demands on testing-time efficiency. In addition, the experiments show that MPLOC-KMSE accelerates the training procedure without sacrificing testing-phase efficiency while reaching nearly the same generalization performance. Finally, although PLOC and MPLOC are proposed for regression, they can easily be extended to classification problems and to other algorithms such as kernel ridge regression.
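To make the backward-pruning idea concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes an RBF kernel, a ridge-regularized squared-error (KMSE-style) objective, and a brute-force "try removing each node" contribution measure; the actual PLOC-KMSE criterion and the batch pruning used in MPLOC-KMSE are not reproduced. All names (rbf_kernel, fit_kmse, prune_kmse) and parameters (gamma, lam, n_keep) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kmse(K_nodes, y, lam=1e-3):
    # Solve a ridge-regularized least-squares problem over the kept nodes.
    # K_nodes: kernel matrix between all training samples and the kept nodes.
    A = K_nodes.T @ K_nodes + lam * np.eye(K_nodes.shape[1])
    alpha = np.linalg.solve(A, K_nodes.T @ y)
    resid = y - K_nodes @ alpha
    return alpha, float(resid @ resid)   # coefficients and objective value

def prune_kmse(X, y, n_keep, gamma=1.0, lam=1e-3):
    # Greedy backward pruning: in each round, drop the node whose removal
    # increases the squared-error objective the least (brute-force variant).
    kept = list(range(len(X)))
    K_full = rbf_kernel(X, X, gamma)
    while len(kept) > n_keep:
        best_j, best_obj = None, np.inf
        for j in kept:                       # trial-remove each remaining node
            trial = [i for i in kept if i != j]
            _, obj = fit_kmse(K_full[:, trial], y, lam)
            if obj < best_obj:
                best_j, best_obj = j, obj
        kept.remove(best_j)
    alpha, _ = fit_kmse(K_full[:, kept], y, lam)
    return kept, alpha                       # significant nodes and their weights
```

A sketch like this makes the testing-phase saving visible: predictions only need kernel evaluations against the `n_keep` retained nodes instead of all training samples, while the batch pruning of MPLOC-KMSE would amortize the per-round refitting cost during training.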