Letters: Training sparse MS-SVR with an expectation-maximization algorithm

  • Authors:
  • D. N. Zheng; J. X. Wang; Y. N. Zhao

  • Affiliations:
  • State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China (all authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2006


Abstract

The solution of multi-scale support vector regression (MS-SVR) with the quadratic loss function can be obtained by solving a time-consuming quadratic programming (QP) problem followed by a post-processing step. This paper adapts an expectation-maximization (EM) algorithm, based on two 2-level hierarchical-Bayes models that implement the l1-norm and the l0-norm regularization terms asymptotically, to train MS-SVR quickly. Experimental results show that the EM algorithm is faster than the QP algorithm for large data sets, that the l0-norm regularization term promotes a far sparser solution than the l1-norm, and that the good performance of MS-SVR should be attributed to both the multi-scale kernels and the regularization terms.
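
EM training of this kind belongs to a family of hierarchical-Bayes schemes whose updates reduce to iteratively reweighted regularized least squares: the E-step turns the current coefficient magnitudes into a diagonal reweighting matrix, and the M-step solves a single linear system, avoiding the QP. The following is a minimal Python sketch of one well-known update of this family, the Jeffreys-prior variant of Figueiredo's adaptive-sparseness EM, w <- U (U H^T H U + sigma^2 I)^{-1} U H^T y with U = diag(|w|), which yields l0-norm-like sparsity asymptotically. The multi-scale design matrix, kernel widths (gammas), noise level (sigma2), and function names are illustrative assumptions, not the paper's exact algorithm or settings.

import numpy as np

def rbf_kernel(X1, X2, gamma):
    # Gaussian RBF kernel matrix at one scale
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def em_sparse_multiscale(X, y, gammas=(0.1, 1.0, 10.0),
                         sigma2=1e-2, n_iter=100, tol=1e-8):
    # Multi-scale design matrix: RBF kernels at several widths, side by side
    H = np.hstack([rbf_kernel(X, X, g) for g in gammas])
    m = H.shape[1]
    w = np.full(m, 1e-2)                 # small nonzero start
    HtH, Hty = H.T @ H, H.T @ y
    for _ in range(n_iter):
        U = np.diag(np.abs(w))           # E-step: reweighting from |w|
        A = U @ HtH @ U + sigma2 * np.eye(m)
        w_new = U @ np.linalg.solve(A, U @ Hty)   # M-step: one linear solve
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Toy usage: sparse multi-scale fit of a noisy sinc function
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(80, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(80)
w = em_sparse_multiscale(X, y)
print("retained basis functions:", int(np.sum(np.abs(w) > 1e-6)))

Because U is rebuilt from the current |w| at every iteration, coefficients driven to zero remain at zero, which is the mechanism behind the sparsity the abstract reports; an l1-norm variant differs only in how the reweighting matrix is derived from the hyperprior.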