Using random weights to train multilayer networks of hard-limiting units
IEEE Transactions on Neural Networks
In recent years, artificial neural networks have been applied very successfully in a wide range of areas. A major reason for this success has been the existence of a training algorithm called backpropagation. This algorithm relies upon the neural units in a network having input/output characteristics that are continuously differentiable. Such units are significantly harder to implement in silicon than neural units with Heaviside (step-function) characteristics. In this paper, we show how a training algorithm similar to backpropagation can be developed for two-layer networks of Heaviside units by treating the network weights (i.e., interconnection strengths) as random variables. This is then used as a basis for the development of a training algorithm for networks with any number of layers, by drawing upon the idea of internal representations. Some examples are given to illustrate the performance of these learning algorithms.
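To make the central idea concrete, the following is a minimal sketch, not the paper's own algorithm. It assumes (our assumption; the abstract does not specify the distribution) that each weight of a single hard-limiting unit is Gaussian with a trainable mean and a fixed spread sigma. The probability that the unit fires is then the Gaussian CDF of the normalised mean activation, which is smooth in the means and can be trained by ordinary gradient descent; the trained means are finally deployed in a deterministic Heaviside unit. The toy task (logical AND), loss, and learning rate are illustrative choices, not taken from the paper.

import numpy as np
from math import erf, exp, pi, sqrt

def norm_cdf(u):
    # standard normal CDF
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

def norm_pdf(u):
    # standard normal density
    return exp(-0.5 * u * u) / sqrt(2.0 * pi)

# Inputs with a fixed bias component; targets for logical AND
# (a linearly separable toy problem -- our choice, not the paper's).
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
t = np.array([0., 0., 0., 1.])

rng = np.random.default_rng(0)
mu = rng.normal(scale=0.1, size=3)  # trainable means of the Gaussian weights
sigma, lr = 1.0, 0.5                # assumed fixed weight spread and step size

for _ in range(2000):
    for x, tk in zip(X, t):
        # If w ~ N(mu, sigma^2 I), then w . x ~ N(mu . x, sigma^2 |x|^2),
        # so P(w . x > 0) = Phi(mu . x / (sigma |x|)) -- smooth in mu.
        s = float(np.linalg.norm(x))
        u = float(mu @ x) / (sigma * s)
        p = norm_cdf(u)
        # gradient of the squared error 0.5 * (p - tk)^2 with respect to mu
        mu -= lr * (p - tk) * norm_pdf(u) * x / (sigma * s)

# Deploy the trained means in a deterministic Heaviside unit.
print((X @ mu > 0).astype(int))   # expected output: [0 0 0 1]

Note the design point this illustrates: the smoothing comes from the randomness of the weights, not from replacing the Heaviside nonlinearity with a sigmoid, so the network used at inference time still consists entirely of hard-limiting units.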