Reduced Set SVMs (RS-SVMs) are a family of methods that simplify the internal structure of SVM models while keeping their decision boundaries as close as possible to the originals. By reducing the number of support vectors, RS-SVMs lower the computational cost of the decision process; this is especially important for large datasets, where many support vectors are typically selected. They can also aid understanding of the internal structure of SVM models through prototype-based rules. This paper presents a new method based on a modified version of the LVQ algorithm, called WLVQ, which pursues both objectives: reducing computational complexity and generating prototype-based rules.
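The prototype idea underlying the approach can be illustrated with a minimal sketch of plain LVQ1 (an assumption for illustration only — the paper's WLVQ adds instance weighting, which is not shown here; all function names and parameters below are hypothetical). After training, classification is a nearest-prototype lookup, so the decision cost depends on the prototype count rather than the number of training points or support vectors:

```python
import math
import random

def _dist(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def lvq1(X, y, lr=0.1, epochs=50, seed=0):
    """Minimal LVQ1 sketch: one prototype per class, initialized
    from the first sample of each class (a simplifying assumption)."""
    rng = random.Random(seed)
    protos = {}
    for xi, yi in zip(X, y):
        protos.setdefault(yi, list(xi))     # first sample of each class
    labels = list(protos)
    P = [protos[c] for c in labels]
    order = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(order)
        for i in order:
            # find the winning (nearest) prototype
            j = min(range(len(P)), key=lambda k: _dist(P[k], X[i]))
            # attract the winner if its label matches, repel otherwise
            sign = 1.0 if labels[j] == y[i] else -1.0
            P[j] = [p + sign * lr * (x - p) for p, x in zip(P[j], X[i])]
    return P, labels

def predict(P, labels, x):
    # nearest-prototype classification: cost scales with len(P)
    j = min(range(len(P)), key=lambda k: _dist(P[k], x))
    return labels[j]
```

A reduced-set method in this spirit would train such prototypes against the SVM's decisions, then classify with the small prototype set instead of the full support-vector expansion.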