In general, similar input data tend to have similar target values. Building on this observation, a Fast Support Vector Regression (FSVR) trained on a reduced training set is proposed. First, the training data are partitioned into blocks using conventional clustering methods such as K-means or FCM. Second, a membership function is defined on each block from the target values of the training data, so that every training example receives a membership degree in the interval [0, 1]; this degree scales the penalty coefficient by multiplying C. Third, the FSVR is trained on the reduced set consisting of the examples whose membership degrees are greater than or equal to a suitably chosen threshold parameter. Experimental results on standard machine learning data sets show that the FSVR achieves better or comparable performance while reducing the number of training examples and speeding up training.
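The three steps above can be sketched in Python with scikit-learn. The paper does not specify the membership function, so the version below is an assumption: within each K-means block, membership decreases linearly with the example's distance from the block's mean target value. The threshold `lam` stands in for the paper's unnamed threshold parameter, and the membership degrees scale C through per-sample weights.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def fsvr_fit(X, y, n_clusters=5, lam=0.5, C=1.0):
    # Step 1: partition the training data into blocks (K-means here;
    # the paper also mentions FCM as an alternative).
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)

    # Step 2: membership degree in [0, 1] for every example, defined from
    # the target values. Assumed form: 1 at the block's mean target,
    # falling linearly to 0 at the block's farthest target.
    mu = np.empty(len(y), dtype=float)
    for k in range(n_clusters):
        idx = labels == k
        d = np.abs(y[idx] - y[idx].mean())
        rng = d.max()
        mu[idx] = (1.0 - d / rng) if rng > 0 else 1.0

    # Step 3: keep only examples whose membership meets the threshold,
    # and let the membership scale the penalty C via sample weights.
    keep = mu >= lam
    model = SVR(C=C).fit(X[keep], y[keep], sample_weight=mu[keep])
    return model, keep

# Usage on synthetic regression data:
rs = np.random.RandomState(0)
X = rs.rand(200, 2)
y = np.sin(6 * X[:, 0]) + 0.1 * rs.randn(200)
model, keep = fsvr_fit(X, y, n_clusters=4, lam=0.5)
```

With noisy continuous targets, examples far from their block's mean target fall below the threshold, so the kept set is strictly smaller than the full training set, which is the source of the reported speed-up.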