Approximation and learning of convex superpositions
Journal of Computer and System Sciences - Special issue: 26th Annual ACM Symposium on the Theory of Computing (STOC'94), May 23–25, 1994, and Second Annual European Conference on Computational Learning Theory (EuroCOLT'95), March 13–15, 1995
Scale-sensitive dimensions, uniform convergence, and learnability
Journal of the ACM (JACM)
A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems
Learning in Neural Networks: Theoretical Foundations
Feedforward Neural Network Methodology
The covering number in learning theory
Journal of Complexity
Some Local Measures of Complexity of Convex Hulls and Generalization Bounds
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
On the size of convex hulls of small sets
The Journal of Machine Learning Research
Entropy of convex hulls: some Lorentz norm results
Journal of Approximation Theory
IEEE Transactions on Information Theory
Comparison of worst case errors in linear and neural network approximation
IEEE Transactions on Information Theory
Universal approximation bounds for superpositions of a sigmoidal function
IEEE Transactions on Information Theory
Geometric rates of approximation by neural networks
SOFSEM'08 Proceedings of the 34th conference on Current trends in theory and practice of computer science
Covering numbers of precompact symmetric convex subsets of Hilbert spaces are investigated. Lower bounds are derived for sets containing orthogonal subsets whose element norms converge to zero sufficiently slowly. When these sets are convex hulls of sets with power-type covering numbers, the bounds are tight. The arguments exploit properties of generalized Hadamard matrices. The results are illustrated by examples from machine learning, neurocomputing, and nonlinear approximation.
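As a rough numerical illustration of the objects in the abstract (not taken from the paper itself), the sketch below samples points from the symmetric convex hull of an orthogonal set with slowly decaying norms and greedily builds an ε-separated subset. The size of such a subset lower-bounds the ε-packing number, which in turn lower-bounds the (ε/2)-covering number. The dimension, decay exponent, and sample size are arbitrary choices for the demonstration.

```python
import numpy as np

def greedy_packing(points, eps):
    """Greedily collect an eps-separated subset of `points`.

    The size of the returned set is a lower bound on the eps-packing
    number of the sampled set, hence on its (eps/2)-covering number.
    """
    centers = []
    for p in points:
        if all(np.linalg.norm(p - c) >= eps for c in centers):
            centers.append(p)
    return len(centers)

rng = np.random.default_rng(0)
d = 20
# Orthogonal set {a_i e_i} with norms a_i = i^(-1/2) decaying slowly
# (a hypothetical choice of decay rate for illustration).
A = np.diag([(i + 1) ** -0.5 for i in range(d)])

# Sample absolute convex combinations sum_i w_i (±a_i e_i) with
# sum_i |w_i| = 1, i.e. points of the symmetric convex hull.
n_samples = 2000
W = rng.dirichlet(np.ones(d), size=n_samples) * rng.choice([-1, 1], size=(n_samples, d))
samples = W @ A

# Finer scales require more centers: the count grows as eps shrinks.
for eps in (0.4, 0.2, 0.1):
    print(f"eps = {eps}: at least {greedy_packing(samples, eps)} centers needed")
```

The Monte Carlo estimate only ever underestimates the true packing number (it sees finitely many points of the hull), but it makes the qualitative power-type growth of the covering numbers visible as ε decreases.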