Support vector machines (SVMs) have become an off-the-shelf solution for many machine learning tasks, but the resulting machines are often exceedingly large, which hampers their use in practical applications that demand extremely fast responses. Methods exist to prune the models after training, but a full SVM must be trained first, which usually entails a large computational cost. Furthermore, the reduction algorithms are prone to falling into local minima and add a non-negligible computational cost of their own. Alternative procedures based on incrementally growing a semiparametric model offer a good compromise between complexity, machine size, and performance. We investigate here the potential benefits of a fast error estimation (FEE) mechanism for improving the semiparametric SVM growing process. Specifically, we propose to use the FEE method to identify the best node to add to the model at every growing step, by selecting the candidate with the lowest cross-validation error. We evaluate the proposed approach on benchmarks with real-world datasets from the UCI Machine Learning Repository.
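To make the growing procedure concrete, the following is a minimal sketch of a greedy candidate-selection loop of this kind. It assumes a Gaussian kernel and a ridge-regularized least-squares fit as the semiparametric model, and it uses plain K-fold cross-validation error as a stand-in for the FEE mechanism; the function names, the parameters (gamma, lam, n_nodes), and the exhaustive candidate pool are illustrative assumptions, not the paper's exact method.

    import numpy as np

    def rbf(X, C, gamma=1.0):
        # Gaussian kernel matrix between samples X (n, d) and centers C (m, d).
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def cv_error(X, y, centers, gamma=1.0, lam=1e-3, k=5):
        # K-fold CV misclassification rate of a ridge-regularized kernel
        # least-squares classifier built on the given centers (labels in {-1, +1}).
        idx = np.arange(len(X))
        err = 0.0
        for fold in np.array_split(idx, k):
            tr = np.setdiff1d(idx, fold)
            Ktr = rbf(X[tr], centers, gamma)
            w = np.linalg.solve(Ktr.T @ Ktr + lam * np.eye(len(centers)),
                                Ktr.T @ y[tr])
            pred = np.sign(rbf(X[fold], centers, gamma) @ w)
            err += (pred != y[fold]).mean()
        return err / k

    def grow_model(X, y, n_nodes=10, gamma=1.0):
        # Greedy growing: at each step, add the candidate node (here, a
        # training sample used as a kernel center) with the lowest CV error.
        centers, pool = [], list(range(len(X)))
        for _ in range(n_nodes):
            best, best_err = None, np.inf
            for c in pool:  # in practice a random subsample of the pool keeps this fast
                e = cv_error(X, y, X[centers + [c]], gamma)
                if e < best_err:
                    best, best_err = c, e
            centers.append(best)
            pool.remove(best)
        return X[centers]

In this sketch the model stays small by construction: the loop stops after n_nodes additions rather than pruning a full SVM after the fact, which mirrors the complexity/size trade-off described above.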