Learning in Neural Networks: Theoretical Foundations
The importance of convexity in learning with squared loss
IEEE Transactions on Information Theory
We study the sample complexity of proper and improper learning problems with respect to different Lq loss functions. We improve the known estimates for classes that have relatively small covering numbers (log-covering numbers that are polynomial with exponent p) with respect to the Lq norm for q ≥ 2.
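As a point of reference for the abstract's terminology, the standard pointwise Lq loss is |f(x) − y|^q, and the empirical risk is its average over a sample. The sketch below is illustrative only; the function names are not from the paper, and q = 2 recovers the squared loss.

```python
def lq_loss(prediction: float, target: float, q: float = 2.0) -> float:
    """Pointwise Lq loss |prediction - target|**q; q = 2 gives squared loss."""
    return abs(prediction - target) ** q

def empirical_risk(predictions, targets, q: float = 2.0) -> float:
    """Empirical Lq risk: average pointwise loss over the sample."""
    return sum(lq_loss(p, t, q) for p, t in zip(predictions, targets)) / len(predictions)
```

Sample complexity results such as the one summarized above bound how many samples are needed before this empirical risk uniformly approximates the true risk over a class of functions.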