The backpropagation algorithm is widely used for training multilayer neural networks. This publication investigates the gain of its activation function(s). Specifically, it is proven that changing the gain of the activation function is equivalent to simultaneously changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem extends to several well-known variants of the backpropagation algorithm, such as those using a momentum term, flat-spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
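The weight part of the stated equivalence can be checked numerically: for a logistic activation with gain γ, f(γ·w·x) = f((γw)·x), so a network with gain γ produces the same outputs as a gain-1 network whose weights are all scaled by γ. The following is a minimal sketch of that forward-pass identity for a hypothetical two-layer network (the function and variable names are illustrative, not from the publication; the accompanying learning-rate transformation proved there is not demonstrated here):

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # Logistic activation with gain: f(x) = 1 / (1 + exp(-gain * x))
    return 1.0 / (1.0 + np.exp(-gain * x))

def forward(x, W1, W2, gain):
    # Illustrative two-layer network; both layers use the same gain.
    h = sigmoid(W1 @ x, gain)
    return sigmoid(W2 @ h, gain)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))
gamma = 2.5  # arbitrary nonstandard gain

# Network with gain gamma vs. gain-1 network with weights scaled by gamma:
y_gain = forward(x, W1, W2, gamma)
y_scaled = forward(x, gamma * W1, gamma * W2, 1.0)
print(np.allclose(y_gain, y_scaled))  # the two outputs coincide
```

The scaling is applied layer by layer: the hidden activations already agree, so scaling the second layer's weights by γ reproduces the gained output as well. This is the mechanism that allows a nonstandard gain, such as that of an optical sigmoid, to be absorbed into the weights.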