To handle large-scale pattern classification problems, various sequential and parallel classification methods have been developed following the divide-and-conquer principle. However, existing sequential methods require long training times, and some parallel methods suffer from decreased generalization accuracy and an increased number of support vectors. In this paper, we propose a novel hierarchical and parallel method for training support vector machines. The simulation results indicate that our method not only speeds up training but also reduces the number of support vectors while maintaining generalization accuracy.
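The abstract does not spell out the algorithm, but a common divide-and-conquer scheme for parallel SVM training works as follows: partition the training set, train one SVM per partition (in parallel), pool the resulting support vectors, and train a final SVM on that much smaller pool. The sketch below illustrates this generic cascade-style idea, not the paper's specific hierarchical method; the function name `cascade_svm`, the partition count, and the use of scikit-learn are all illustrative assumptions.

```python
# Illustrative divide-and-conquer SVM training (a generic cascade-style
# sketch, NOT the paper's exact hierarchical algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def cascade_svm(X, y, n_parts=4, C=1.0, seed=0):
    """Train SVMs on data partitions, then retrain on pooled support vectors."""
    rng = np.random.default_rng(seed)
    # Split the training set into roughly equal random partitions.
    parts = np.array_split(rng.permutation(len(X)), n_parts)
    sv_X, sv_y = [], []
    for part in parts:
        # Each partition's SVM could be trained on a separate worker.
        clf = SVC(kernel="linear", C=C).fit(X[part], y[part])
        sv_X.append(clf.support_vectors_)
        sv_y.append(y[part][clf.support_])
    # The final SVM sees only the pooled support vectors, which is
    # typically a small fraction of the original training set.
    return SVC(kernel="linear", C=C).fit(np.vstack(sv_X), np.concatenate(sv_y))

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
model = cascade_svm(X, y)
print(len(model.support_), model.score(X, y))
```

Because the final training pass runs on support vectors alone, it is fast, and filtering non-support vectors early is what gives schemes like this their speedup; the trade-off the abstract mentions (accuracy loss and support-vector growth in some parallel methods) arises when the partition-level models disagree too strongly.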