Optimal learning rates for least squares regularized regression with unbounded sampling

  • Authors:
  • Cheng Wang; Ding-Xuan Zhou

  • Affiliations:
  • College of Mathematical Sciences, Guangxi Normal University, Guilin, Guangxi 541004, PR China; Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China

  • Venue:
  • Journal of Complexity
  • Year:
  • 2011

Abstract

A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of the output sample values; this excludes the common case of Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces, without assuming uniform boundedness of the sampling process. By imposing incremental conditions on the moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
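
For context, the least squares regularization scheme mentioned in the abstract is commonly written as below. This display is a standard formulation of the scheme, not quoted from the paper, and the Bernstein-type moment condition shown is a typical way such "conditions on moments" relax uniform boundedness; the paper's exact hypothesis may differ in constants and form.

```latex
% Least squares regularization (kernel ridge regression) over an RKHS H_K,
% given a sample z = {(x_i, y_i)}_{i=1}^m and a parameter \lambda > 0:
\[
  f_{z,\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K}
  \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2
        \;+\; \lambda \,\| f \|_K^2 \right\}.
\]
% A typical moment condition used in place of |y| <= M: for some c, M > 0
% and every integer \ell \ge 2,
\[
  \int_Y |y|^{\ell} \, d\rho(y \mid x) \;\le\; c \,\ell!\, M^{\ell}
  \quad \text{for almost every } x,
\]
% a bound that Gaussian noise satisfies even though |y| is unbounded.
```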
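
As a further illustration (not the authors' code), the minimal Python sketch below implements this scheme, i.e., kernel ridge regression, on synthetic data whose outputs carry Gaussian noise and are therefore unbounded; the Gaussian kernel, sample size, and all parameter values are arbitrary choices for the example.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2))

def krr_fit(X, y, lam):
    """Least squares regularization in the RKHS of the Gaussian kernel:
    minimize (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem f = sum_i alpha_i K(x_i, .), where
    alpha solves the linear system (K + m * lam * I) alpha = y."""
    m = len(y)
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + m * lam * np.eye(m), y)

def krr_predict(alpha, X_train, X_test):
    return gaussian_kernel(X_test, X_train) @ alpha

# Synthetic data: regression function sin(2*pi*x) plus Gaussian noise.
# The outputs y_i are unbounded, so the classical |y| <= M assumption
# fails, yet moment conditions of the kind discussed above hold.
rng = np.random.default_rng(0)
m = 200
X = rng.uniform(0, 1, size=(m, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3, size=m)

alpha = krr_fit(X, y, lam=1e-3)
X_test = np.linspace(0, 1, 5)[:, None]
print(krr_predict(alpha, X, X_test))  # approximates sin(2*pi*x)
```

The linear system in `krr_fit` follows from the representer theorem; taking the regularization parameter `lam` to zero at an appropriate rate as the sample size grows is what the learning-rate analysis in papers of this kind quantifies.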