Some greedy learning algorithms for sparse regression and classification with Mercer kernels
The Journal of Machine Learning Research
An Occam approximation is an algorithm that takes as input a set of samples of a function and a tolerance $\epsilon$, and produces as output a compact representation of a function that is within $\epsilon$ of the given samples. We show that the existence of an Occam approximation is sufficient to guarantee the probably approximately correct learnability of classes of functions on the reals, even in the presence of arbitrarily large but random additive noise. One consequence of our results is a general technique for the design and analysis of nonlinear filters in digital signal processing.
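To make the definition concrete, here is a minimal sketch of one possible Occam-style approximation, not the paper's algorithm: a greedy piecewise-constant fit over sorted samples. Each segment's value is the midrange of its samples, so every sample in the segment is matched to within $\epsilon$ as long as the segment's sample range stays below $2\epsilon$; the output is compact because smooth stretches of the function collapse into single segments. The function and variable names are illustrative assumptions.

```python
def occam_approximate(xs, ys, eps):
    """Greedily fit a piecewise-constant function to samples (xs, ys),
    where xs is sorted ascending. Each segment uses the midrange of its
    samples, which is within eps of every sample in the segment as long
    as the segment's value range does not exceed 2*eps."""
    segments = []  # list of (start_x, constant_value) pairs
    i, n = 0, len(xs)
    while i < n:
        lo = hi = ys[i]
        j = i
        # Extend the segment while the sample range stays within 2*eps.
        while j + 1 < n:
            new_lo, new_hi = min(lo, ys[j + 1]), max(hi, ys[j + 1])
            if new_hi - new_lo > 2 * eps:
                break
            lo, hi = new_lo, new_hi
            j += 1
        segments.append((xs[i], (lo + hi) / 2))
        i = j + 1
    return segments

def evaluate(segments, x):
    """Evaluate the piecewise-constant representation at x."""
    value = segments[0][1]
    for start, v in segments:
        if x >= start:
            value = v
        else:
            break
    return value
```

For example, samples of a step-like signal with small additive noise compress to a handful of segments, and by construction every sample is reproduced to within the tolerance. The PAC-learnability result in the abstract says that such compression, when achievable, suffices for learning even under large random noise.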