Artificial Intelligence Review - Special issue on lazy learning
Fast training of support vector machines using sequential minimal optimization. In: Advances in kernel methods.
Data mining: practical machine learning tools and techniques with Java implementations
Technical Note: Naive Bayes for Regression. Machine Learning.
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Learning locally weighted C4.4 for class probability estimation. In: DS'07 Proceedings of the 10th international conference on Discovery science.
Shevade et al. [1] successfully extended several improvements to Smola and Scholkopf's SMO algorithm [2] for solving regression problems; the resulting algorithm is named SMOreg. In this paper, we use SMOreg in exactly the same way as linear regression (LR) is used in locally weighted linear regression (LWLR) [5]: a local SMOreg is fit to a subset of the training instances that lie in the neighborhood of the test instance whose target function value is to be predicted. The training instances in this neighborhood are weighted, with less weight assigned to instances that are farther from the test instance. A regression prediction is then obtained from SMOreg, taking the attribute values of the test instance as input. We call our improved algorithm locally weighted SMOreg, or simply LWSMOreg. We conduct an extensive empirical comparison of the related algorithms in two groups, in terms of relative mean absolute error, using all 36 regression data sets obtained from various sources and recommended by Weka [3]. In the first group, we compare SMOreg [1] with NB (naive Bayes) [4], KNNDW (k-nearest neighbor with distance weighting) [5], and LR. In the second group, we compare LWSMOreg with SMOreg, LR, and LWLR. Our experimental results show that SMOreg performs well in regression and that LWSMOreg significantly outperforms all the other algorithms compared.
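The locally weighted scheme described above — find the neighborhood of the test instance, weight its members by distance, fit a local model, and query it — can be sketched as follows. This is a minimal one-dimensional illustration of the general LWLR template the paper builds on, with ordinary weighted least squares standing in for the local SMOreg fit; the function name, the Gaussian weighting kernel, and the neighborhood size `k` are illustrative assumptions, not the paper's exact choices.

```python
import math

def locally_weighted_predict(train_x, train_y, x0, k=3):
    """Predict the target at query point x0 by fitting a distance-weighted
    linear model to the k nearest training instances (1-D sketch of the
    locally weighted scheme; weighted least squares stands in for SMOreg)."""
    # Find the k training instances nearest to the query point.
    neighbours = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - x0))[:k]
    # Gaussian distance weighting: nearer instances receive larger weights.
    bandwidth = max(abs(x - x0) for x, _ in neighbours) or 1.0
    weights = [math.exp(-((x - x0) / bandwidth) ** 2) for x, _ in neighbours]
    # Closed-form weighted least squares for the local model y = a + b*x.
    sw = sum(weights)
    mx = sum(w * x for w, (x, _) in zip(weights, neighbours)) / sw
    my = sum(w * y for w, (_, y) in zip(weights, neighbours)) / sw
    num = sum(w * (x - mx) * (y - my) for w, (x, y) in zip(weights, neighbours))
    den = sum(w * (x - mx) ** 2 for w, (x, _) in zip(weights, neighbours))
    b = num / den if den else 0.0
    a = my - b * mx
    # Query the local model at the test instance.
    return a + b * x0
```

On data that is exactly linear, the local weighted fit reproduces the line, so `locally_weighted_predict([0, 1, 2, 3, 4], [0, 2, 4, 6, 8], 2.5)` returns 5.0.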
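The comparison above is reported in terms of relative mean absolute error. The abstract does not spell out the formula, so the normalisation below — the model's mean absolute error divided by the mean absolute error of always predicting the mean target — is a common convention assumed here for illustration.

```python
def relative_mae(y_true, y_pred):
    """Relative mean absolute error: the model's MAE normalised by the MAE
    of a baseline that always predicts the mean target value (an assumed,
    commonly used definition; 0.0 is perfect, 1.0 matches the baseline)."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    baseline_mae = sum(abs(t - mean) for t in y_true) / n
    return mae / baseline_mae
```

A perfect predictor scores 0.0, and a predictor that always outputs the mean scores 1.0, so values below 1.0 indicate an improvement over the trivial baseline.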