A stochastic version of the delta rule
CNLS '89: Proceedings of the Ninth Annual International Conference of the Center for Nonlinear Studies on Self-Organizing, Collective, and Cooperative Phenomena in Natural and Artificial Computing Networks (Emergent Computation)
Advances in neural information processing systems 2
Keeping the neural networks simple by minimizing the description length of the weights
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Bayesian Learning for Neural Networks
Neural computing increases robot adaptivity
Natural Computing: an international journal
Sequential Learning in Feedforward Networks: Proactive and Retroactive Interference Minimization
ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
Neural Learning Invariant to Network Size Changes
ICANN '01 Proceedings of the International Conference on Artificial Neural Networks
Neural learning methods yielding functional invariance
Theoretical Computer Science
Natural inspiration for artificial adaptivity: some neurocomputing experiences in robotics
UC'05 Proceedings of the 4th international conference on Unconventional Computation
We show that minimizing the expected error of a feedforward network over a distribution of weights yields an approximation that tends to become independent of network size as the number of hidden units grows. This minimization is easy to perform, and the complexity of the function implemented by the network is regulated by the variance of the weight distribution. For a fixed variance, there is a number of hidden units above which the implemented function either does not change at all or changes only slightly, with the change tending to zero as the network grows. In sum, complexity is controlled solely by the variance of the weight distribution, not by the architecture, provided the network is large enough.
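The central idea — minimizing the expected error over a Gaussian distribution of weights, with the variance acting as the complexity knob — can be sketched with a simple Monte Carlo scheme: at each training step, sample one network from the weight distribution and backpropagate through it, which gives an unbiased gradient of the expected error with respect to the mean weights. The sketch below is illustrative only, not the paper's algorithm: the toy regression task, the network size, the noise level `sigma`, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression task (an assumption for illustration, not from the paper).
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * X)

n_hidden = 20   # network size; the claim is that results become insensitive to this
sigma = 0.02    # std dev of the weight distribution: the complexity knob

# Mean weights of the distribution N(mean, sigma^2 I).
W1 = 0.5 * rng.standard_normal((1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, 1))
b2 = np.zeros(1)

def forward(X, W1, b1, W2, b2):
    """One-hidden-layer tanh network."""
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

def expected_error(params, n_samples=16):
    """Monte Carlo estimate of the expected MSE over the weight distribution."""
    total = 0.0
    for _ in range(n_samples):
        noisy = [p + sigma * rng.standard_normal(p.shape) for p in params]
        _, pred = forward(X, *noisy)
        total += np.mean((pred - y) ** 2)
    return total / n_samples

initial_err = expected_error([W1, b1, W2, b2])

lr = 0.1
for step in range(3000):
    # Sample one network from the weight distribution ...
    eps = [sigma * rng.standard_normal(p.shape) for p in (W1, b1, W2, b2)]
    nW1, nb1, nW2, nb2 = W1 + eps[0], b1 + eps[1], W2 + eps[2], b2 + eps[3]
    # ... and backpropagate through the sampled network; averaged over samples,
    # this descends the expected error with respect to the mean weights.
    H, pred = forward(X, nW1, nb1, nW2, nb2)
    dpred = 2.0 * (pred - y) / len(X)
    dW2 = H.T @ dpred
    db2 = dpred.sum(axis=0)
    dH = dpred @ nW2.T
    dZ = dH * (1.0 - H ** 2)
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_err = expected_error([W1, b1, W2, b2])
```

Under this scheme, shrinking `sigma` lets the mean network fit the data more tightly, while a larger `sigma` forces a smoother (lower-complexity) fit regardless of `n_hidden` — a small-scale analogue of the size-independence claim in the abstract.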