In this paper, we motivate the need to estimate bounds on the learning curves of average-case learning algorithms when they perform worst on training samples. We then apply the method of reducing learning problems to hypothesis-testing problems to investigate the learning curves of a so-called ill-disposed learning algorithm in terms of a measure of system complexity, the Boolean interpolation dimension. Since the ill-disposed algorithm behaves worse than ordinary ones, and the Boolean interpolation dimension is generally bounded by the number of system weights, the results can be applied to interpreting or bounding the worst-case learning curve in real learning situations. This study leads to a new understanding of worst-case generalization in real learning situations, one that differs significantly from that obtained in the uniformly learnable setting via Vapnik-Chervonenkis (VC) dimension analysis. We illustrate the results with numerical simulations.
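The flavor of an ill-disposed learner can be sketched numerically. The following is a minimal illustration, not the paper's actual experiment: for a finite class of threshold functions, the learner picks, among hypotheses consistent with the training sample, the one with the largest true error, and we average that worst-case error over random samples to trace a learning curve. The hypothesis class, target, and sample sizes are all assumptions chosen for simplicity.

```python
import random

# Hedged sketch of a worst-case ("ill-disposed") learning curve.
# Hypothesis class: thresholds h_t(x) = 1 iff x >= t on the domain {0,...,99}.
# The ill-disposed learner returns the consistent hypothesis with the
# LARGEST true error with respect to the target threshold.

DOMAIN = list(range(100))
THRESHOLDS = list(range(101))  # t = 0 .. 100

def true_error(t, target_t):
    # Fraction of the domain on which h_t disagrees with h_target.
    return abs(t - target_t) / len(DOMAIN)

def worst_consistent_error(sample, target_t):
    # Among thresholds consistent with the labeled sample, return the
    # largest true error -- the ill-disposed learner's choice.
    consistent = [t for t in THRESHOLDS
                  if all((x >= t) == (x >= target_t) for x in sample)]
    return max(true_error(t, target_t) for t in consistent)

def learning_curve(sizes, target_t=50, trials=200, seed=0):
    # Average the worst-case error over random i.i.d. training samples.
    rng = random.Random(seed)
    curve = {}
    for m in sizes:
        errs = [worst_consistent_error(
                    [rng.choice(DOMAIN) for _ in range(m)], target_t)
                for _ in range(trials)]
        curve[m] = sum(errs) / trials
    return curve

curve = learning_curve([1, 5, 20, 80])
# The averaged worst-case error shrinks as the sample size grows.
```

Even in this toy setting, the curve traced by the worst consistent hypothesis decays much more slowly at small sample sizes than the average-case behavior of an ordinary consistent learner, which is the gap the abstract's worst-case analysis is about.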