Using locally weighted learning to improve SMOreg for regression

  • Authors:
  • Chaoqun Li; Liangxiao Jiang

  • Affiliations:
  • Faculty of Mathematics and Physics, China University of Geosciences, Wuhan, Hubei, P.R. China; Faculty of Computer Science, China University of Geosciences, Wuhan, Hubei, P.R. China

  • Venue:
  • PRICAI'06: Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence
  • Year:
  • 2006


Abstract

Shevade et al. [1] successfully extended several improved ideas to Smola and Schölkopf's SMO algorithm [2] for solving regression problems; the resulting algorithm is simply named SMOreg. In this paper, we use SMOreg in exactly the same way that linear regression (LR) is used in locally weighted linear regression (LWLR) [5]: a local SMOreg is fit to a subset of the training instances in the neighborhood of the test instance whose target function value is to be predicted. The training instances in this neighborhood are weighted, with less weight assigned to instances that are further from the test instance. A regression prediction is then obtained from SMOreg by taking the attribute values of the test instance as input. We call our improved algorithm locally weighted SMOreg, or simply LWSMOreg. We conduct an extensive empirical comparison of the related algorithms in two groups in terms of relative mean absolute error, using all 36 regression data sets obtained from various sources and recommended by Weka [3]. In the first group, we compare SMOreg [1] with NB (naive Bayes) [4], KNNDW (k-nearest neighbor with distance weighting) [5], and LR. In the second group, we compare LWSMOreg with SMOreg, LR, and LWLR. Our experimental results show that SMOreg performs well in regression and that LWSMOreg significantly outperforms all the other compared algorithms.
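The local-fitting scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: scikit-learn's `SVR` stands in for SMOreg, and the neighborhood size `k`, the inverse-distance weighting, and all hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR


def lw_svr_predict(X_train, y_train, x_test, k=20, C=1.0):
    """Locally weighted SVR prediction (a sketch of the LWSMOreg idea).

    Fits an SVR (stand-in for SMOreg) on the k nearest neighbors of
    x_test, assigning larger sample weights to closer instances, then
    predicts the target value for x_test.
    """
    # Euclidean distance from the test instance to every training instance.
    d = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(d)[:k]  # indices of the k nearest neighbors

    # Inverse-distance weights: nearer instances get more weight.
    # (The weighting kernel is an illustrative choice, not the paper's.)
    w = 1.0 / (d[idx] + 1e-8)
    w /= w.max()  # normalize so weights lie in (0, 1]

    model = SVR(C=C, kernel="rbf")
    model.fit(X_train[idx], y_train[idx], sample_weight=w)
    return model.predict(x_test.reshape(1, -1))[0]


# Toy usage: recover a noisy sine at a single query point.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
pred = lw_svr_predict(X, y, np.array([1.0]))
```

Note that, as in LWLR, a fresh local model is trained for every test instance, so prediction cost grows with the number of queries; the benefit is that each model only has to fit the target function locally.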