References:
Learning from good and bad data. Communications of the ACM.
Constant depth circuits, Fourier transform, and learnability. Journal of the ACM (JACM).
Uniform-distribution attribute noise learnability. COLT '99: Proceedings of the Twelfth Annual Conference on Computational Learning Theory.
Uniform-distribution attribute noise learnability. Information and Computation.
We study a procedure for estimating an upper bound on an unknown noise factor in the frequency domain. A learning algorithm based on the Fourier transform was originally given by Linial, Mansour and Nisan. While Linial, Mansour and Nisan assumed that the learning algorithm estimates Fourier coefficients from noiseless data, Bshouty, Jackson, and Tamon, and also Ohtsuki and Tomita, extended the algorithm to ones that are robust to noisy data. The noise process that we consider is as follows: for an example (x, f(x)), where x ∈ {0,1}^n and f(x) ∈ {-1,1}, each bit of x and f(x) is flipped independently with probability η during the learning process. The previous learning algorithms for noisy data all assume that the noise factor η, or an upper bound on η, is known in advance. The learning algorithm proposed in this paper works without this assumption. We estimate an upper bound on the noise factor by evaluating a noisy power spectrum in the frequency domain and by using a sampling trick. Combining this procedure with Ohtsuki and Tomita's algorithm, we obtain a quasi-polynomial-time learning algorithm that can cope with noise without knowing any information about the noise in advance.
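The following is a minimal sketch of the noise model described above, not of the paper's algorithm. It simulates the process in which each attribute bit and the label are flipped independently with probability η, and checks the standard observation underlying such noise-robust Fourier estimators: for a parity character χ_S, the empirical correlation with noisy examples is attenuated by the factor (1-2η)^{|S|+1} relative to the true Fourier coefficient. All names and parameters here are illustrative assumptions.

```python
import random

def noisy_example(x, fx, eta, rng):
    # Flip each attribute bit of x and the label f(x) independently with probability eta.
    x_noisy = tuple(b ^ (rng.random() < eta) for b in x)
    f_noisy = fx if rng.random() >= eta else -fx
    return x_noisy, f_noisy

def chi(S, x):
    # Parity character chi_S(x) = (-1)^(sum of x_i for i in S).
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_coefficient(S, f, n, eta, m, seed=0):
    # Empirical average of chi_S(x') * f'(x) over m noisy examples,
    # with x drawn uniformly from {0,1}^n.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        x = tuple(rng.randrange(2) for _ in range(n))
        xn, fn = noisy_example(x, f(x), eta, rng)
        total += chi(S, xn) * fn
    return total / m

n, eta = 8, 0.1
S = [0, 2, 5]
f = lambda x: chi(S, x)                # target: parity on S, so the true coefficient is 1
est = estimate_coefficient(S, f, n, eta, m=200000)
pred = (1 - 2 * eta) ** (len(S) + 1)   # attenuation factor (1-2*eta)^(|S|+1)
print(round(est, 2), round(pred, 2))
```

Because the attenuation depends only on η and |S|, an algorithm that knows an upper bound on η can compensate for it when estimating coefficients; the procedure in the abstract removes the need to know that bound in advance.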