We put forth a new paradigm for neural network training in which the initial weights of the network are set to zero, used in conjunction with randomly drawn learning rates. To validate the approach, mean test errors were computed for both the traditional random weight initialization and the newly proposed paradigm. For some problems, the new paradigm yields a lower mean test error than the traditional random initialization approach. These results suggest that the proposed paradigm is equivalent to, and at times better than, traditional random weight initialization, and can serve as an alternative way to train neural networks.
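The abstract describes the method only at a high level, so the NumPy snippet below is a minimal sketch of how it might look in practice: a one-hidden-layer network whose weights all start at zero, updated with a learning rate drawn independently for each weight. The XOR task, network size, and the uniform sampling range for the learning rates are illustrative assumptions, not details taken from the paper. One plausible reading of why the combination works is that the per-weight random rates break the symmetry that zero initialization would otherwise leave in place.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy XOR task (an assumed benchmark; the abstract does not name its test problems).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1

# Proposed paradigm: every weight and bias starts at exactly zero ...
W1 = np.zeros((n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = np.zeros((n_hid, n_out)); b2 = np.zeros(n_out)

# ... combined with a random learning rate per parameter.
# The sampling range [0.05, 1.0] is an assumption; the abstract gives none.
lr_W1 = rng.uniform(0.05, 1.0, W1.shape)
lr_b1 = rng.uniform(0.05, 1.0, b1.shape)
lr_W2 = rng.uniform(0.05, 1.0, W2.shape)
lr_b2 = rng.uniform(0.05, 1.0, b2.shape)

for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation for mean-squared error with sigmoid units.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Element-wise learning rates make identical gradients produce
    # different updates, so the zero-initialized units can diverge.
    W2 -= lr_W2 * (h.T @ d_out)
    b2 -= lr_b2 * d_out.sum(axis=0)
    W1 -= lr_W1 * (X.T @ d_h)
    b1 -= lr_b1 * d_h.sum(axis=0)

# Mean squared error on the XOR set (train and test coincide here).
print("mean error:", float(np.mean((out - y) ** 2)))
```

With a single shared learning rate, zero initialization would give every hidden unit the same gradient forever; drawing a separate rate per weight is what lets this sketch escape that degenerate case, which appears to be the core of the proposed paradigm.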