The mixtures-of-experts (ME) methodology provides a tool for classification in which experts, given by logistic regression or Bernoulli models, are combined according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the ME architecture is bounded below by the number of experts m and above by O(m^4 s^2), where s is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound m is attained; in this case we obtain the exact result that the VC dimension equals the number of experts.
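For orientation, the ME architecture referred to above is conventionally written as a gated combination of m experts acting on an input x in R^s. The following is a standard formulation (the notation is ours and may differ in detail from the paper's exact parameterization), with softmax gating weights g_j and logistic-regression experts:

    P(y = 1 \mid x) = \sum_{j=1}^{m} g_j(x)\, \sigma(w_j^\top x + b_j),
    \quad g_j(x) = \frac{\exp(v_j^\top x + c_j)}{\sum_{k=1}^{m} \exp(v_k^\top x + c_k)},
    \quad \sigma(t) = \frac{1}{1 + e^{-t}}.

The VC-dimension bounds above are stated in terms of the number of experts m and the input dimension s appearing in this parameterization.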