A novel approach to generalisation is presented that can, under certain circumstances, guarantee generalisation to binary-output data for which no targets have been given. The guarantee rests on recognising a persistent global minimum error solution. An empirical test of whether the guarantee holds is provided, using a technique called target reversal: two neural networks are trained with opposing targets, and the convergence of both signals that the guarantee is valid.
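The abstract leaves the mechanics of target reversal unspecified. The following is a minimal sketch of how such a test might look, under the assumption that the two networks are identically structured, one trained on provisional binary targets and the other on their complement, with joint convergence to low error taken as the signal; the network size, learning rate, and tolerance are illustrative choices, not values from the paper.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=2.0, epochs=5000, seed=0):
    """Train a one-hidden-layer sigmoid MLP by batch gradient descent
    on squared error; return the final mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1)            # hidden activations
        out = sig(H @ W2)          # network output in (0, 1)
        err = out - y
        d_out = err * out * (1 - out)          # backprop through output sigmoid
        d_hid = (d_out @ W2.T) * H * (1 - H)   # backprop through hidden layer
        W2 -= lr * H.T @ d_out / len(X)
        W1 -= lr * X.T @ d_hid / len(X)
    return float(np.mean((out - y) ** 2))

def target_reversal_test(X, y, tol=0.05):
    """Hypothetical target-reversal check: train one net on provisional
    targets y and a second on the reversed targets 1 - y.  Both reaching
    low error is read as the signal that the guarantee applies."""
    e_fwd = train_mlp(X, y, seed=1)
    e_rev = train_mlp(X, 1.0 - y, seed=2)
    return (e_fwd < tol and e_rev < tol), (e_fwd, e_rev)
```

Because the source does not state the convergence criterion, the `tol` threshold here is an arbitrary stand-in for whatever measure of a "persistent global minimum error solution" the paper actually uses.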