What size net gives valid generalization? Neural Computation.
Training a 3-node neural network is NP-complete. COLT '88 Proceedings of the first annual workshop on Computational learning theory.
Mathematical control theory: deterministic systems.
Introduction to the theory of neural computation.
COLT '91 Proceedings of the fourth annual workshop on Computational learning theory
On the computational power of sigmoid versus boolean threshold circuits (extended abstract). SFCS '91 Proceedings of the 32nd annual symposium on Foundations of computer science.
On the computational power of neural nets. COLT '92 Proceedings of the fifth annual workshop on Computational learning theory.
Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation.
VC dimension and uniform learnability of sparse polynomials and rational functions. SIAM Journal on Computing.
Bounds for the computational power and learning complexity of analog neural nets. STOC '93 Proceedings of the twenty-fifth annual ACM symposium on Theory of computing.
Feedforward nets for interpolation and classification. Journal of Computer and System Sciences.
Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers. COLT '93 Proceedings of the sixth annual conference on Computational learning theory.
COLT '94 Proceedings of the seventh annual conference on Computational learning theory
Polynomial bounds for VC dimension of sigmoidal neural networks. STOC '95 Proceedings of the twenty-seventh annual ACM symposium on Theory of computing.
Dense shattering and teaching dimensions for differentiable families (extended abstract). COLT '97 Proceedings of the tenth annual conference on Computational learning theory.
Function Estimation by Feedforward Sigmoidal Networks with Bounded Weights. Neural Processing Letters.
How Bad May Learning Curves Be? IEEE Transactions on Pattern Analysis and Machine Intelligence.
Training a single sigmoidal neuron is hard. Neural Computation.
Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard. ALT '01 Proceedings of the 12th International Conference on Algorithmic Learning Theory.
Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks. Neural Computation.
On approximate learning by multi-layered feedforward circuits. Theoretical Computer Science (Algorithmic learning theory, ALT 2000).
Analog versus discrete neural networks. Neural Computation.
Randomized algorithms for robust controller synthesis using statistical learning theory. Automatica (Journal of IFAC).