An approximation theory approach to learning with l1 regularization

  • Authors:
  • Hong-Yan Wang; Quan-Wu Xiao; Ding-Xuan Zhou

  • Affiliations:
  • School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou, 310018, China; Microsoft Search Technology Center Asia, Beijing, 100080, China; Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China

  • Venue:
  • Journal of Approximation Theory
  • Year:
  • 2013

Abstract

Regularization schemes with an ℓ^1-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the ℓ^1-regularizer. We show that convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition.
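The paper itself develops the error analysis theoretically; as a rough illustration of the kind of scheme studied, the following sketch fits a kernel expansion f(x) = Σ_j c_j K(x, x_j) by minimizing an empirical least-squares error plus an ℓ^1 penalty on the coefficients. All choices here (Gaussian kernel, coordinate-descent solver, parameter values) are illustrative assumptions, not the authors' specific algorithm or parameters.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gaussian kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, sigma=1.0, n_iter=200):
    """Illustrative ℓ^1-regularized kernel regression (lasso-style).

    Minimizes (1/m) * ||K c - y||^2 + lam * ||c||_1 over coefficients c
    by cyclic coordinate descent with soft-thresholding, which is what
    induces sparsity in the kernel expansion.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    c = np.zeros(m)
    col_sq = (K ** 2).sum(axis=0) / m  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(m):
            # residual with coordinate j removed from the current fit
            r = y - K @ c + K[:, j] * c[j]
            rho = K[:, j] @ r / m
            # soft-thresholding: coefficients below the threshold go to zero
            c[j] = np.sign(rho) * max(abs(rho) - lam / 2, 0) / col_sq[j]
    return c, K

# Example: larger lam produces a sparser coefficient vector.
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
c_small, K = l1_kernel_regression(X, y, lam=1e-3, sigma=0.3)
c_large, _ = l1_kernel_regression(X, y, lam=1.0, sigma=0.3)
```

With a small penalty the fit K @ c_small closely tracks the data, while the large penalty drives most coefficients to exactly zero, illustrating the sparse representations the abstract refers to.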