We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm makes no assumptions about the transfer functions of individual neurons and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and uses a “temperature” parameter in a manner similar to that in simulated annealing. A heuristic “annealing schedule” is presented that is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and the advantages of appropriate error measures are illustrated using a variety of problems.
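To make the update concrete, the sketch below implements one plausible reading of such a correlation-based rule in Python. It is illustrative only, not the paper's exact formulation: the fixed step size `step`, the logistic acceptance probability, the 100-iteration annealing window, and the toy `error_fn` are all assumptions standing in for the details given in the paper.

```python
import numpy as np

# Illustrative sketch of a correlation-based weight update in the spirit
# of Alopex. The step size, acceptance rule, and annealing heuristic are
# assumptions for demonstration, not the authors' exact formulation.

def alopex_step(w, dw_prev, dE, T, step, rng):
    # Per-weight correlation between the previous weight change and the
    # resulting change in the global error (a scalar broadcast to all weights).
    C = dw_prev * dE
    # Probability of stepping in the negative direction (Boltzmann-like,
    # with temperature T): the correlation biases each weight toward the
    # direction that previously decreased the global error.
    p = 1.0 / (1.0 + np.exp(np.clip(-C / T, -50.0, 50.0)))
    # Every weight moves by the same fixed amount; only the sign is stochastic.
    sign = np.where(rng.random(w.shape) < p, -1.0, 1.0)
    dw = step * sign
    return w + dw, dw

def train(error_fn, w, n_iters=5000, step=0.01, T=1.0, anneal_every=100, seed=0):
    rng = np.random.default_rng(seed)
    # Take one random first step so the initial correlations are defined.
    E_prev = error_fn(w)
    dw_prev = step * rng.choice([-1.0, 1.0], size=w.shape)
    w = w + dw_prev
    corr_window = []
    for n in range(n_iters):
        E = error_fn(w)
        dE = E - E_prev  # error change caused by the previous move
        corr_window.append(np.mean(np.abs(dw_prev * dE)))
        w, dw_prev = alopex_step(w, dw_prev, dE, T, step, rng)
        E_prev = E
        # Heuristic annealing: periodically reset T to the mean absolute
        # correlation over the window (a stand-in for the paper's schedule).
        if (n + 1) % anneal_every == 0:
            T = max(float(np.mean(corr_window)), 1e-8)
            corr_window = []
    return w

# Hypothetical usage: minimize a toy quadratic "error measure" over 10 weights.
w_final = train(lambda w: float(np.sum((w - 1.5) ** 2)), np.zeros(10))
```

Note that the only globally shared quantity here is the scalar change in error; each weight's update otherwise depends only on its own previous step, which is what makes the simultaneous, fully parallel update described in the abstract possible.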