We consider the problem of predicting {0,1}-valued functions on R^n and smaller domains, based on their values at randomly drawn points. Our model is related to Valiant's PAC learning model, but does not require the hypotheses used for prediction to be represented in any specified form. In our main result we show how to construct prediction strategies that are optimal to within a constant factor for any reasonable class F of target functions. This result is based on new combinatorial results about classes of functions of finite VC dimension. We also discuss more computationally efficient algorithms for predicting indicator functions of axis-parallel rectangles, more general intersection-closed concept classes, and halfspaces in R^n; these are also optimal to within a constant factor. Finally, we compare the general performance of prediction strategies derived by our method to those derived from methods in PAC learning theory.
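As an illustration of the kind of efficient predictor mentioned for intersection-closed classes, the following is a minimal sketch (not the paper's exact strategy) of the standard closure-based approach for axis-parallel rectangles: predict 1 exactly when the query point lies in the smallest axis-parallel rectangle containing all positive examples seen so far. All class and method names here are hypothetical.

```python
class ClosureRectanglePredictor:
    """Sketch of a closure-based online predictor for indicator functions
    of axis-parallel rectangles in R^n (an intersection-closed class).
    Hypothetical illustration, not the strategy analyzed in the paper."""

    def __init__(self, n):
        self.n = n
        self.lo = None  # per-coordinate minima over positive examples seen
        self.hi = None  # per-coordinate maxima over positive examples seen

    def predict(self, x):
        # Before any positive example is seen, the closure is empty: predict 0.
        if self.lo is None:
            return 0
        # Predict 1 iff x lies inside the current closure (bounding box).
        return int(all(self.lo[i] <= x[i] <= self.hi[i] for i in range(self.n)))

    def update(self, x, label):
        # Only positive examples expand the closure; negatives are ignored,
        # since the closure is always contained in the target rectangle.
        if label == 1:
            if self.lo is None:
                self.lo = list(x)
                self.hi = list(x)
            else:
                self.lo = [min(a, b) for a, b in zip(self.lo, x)]
                self.hi = [max(a, b) for a, b in zip(self.hi, x)]
```

Because the closure never extends beyond the target rectangle, this predictor can only err by predicting 0 on positive points, and each such mistake strictly enlarges the closure along some coordinate.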