Backpropagation (BP) is one of the most frequently used practical methods for supervised training of artificial neural networks. During learning, BP may get stuck in local minima, producing suboptimal solutions and thus limiting the effectiveness of the training. This work addresses the problem of avoiding local minima and introduces a new learning technique that substitutes the gradient descent algorithm in BP with an optimization method performing a global search of the multi-dimensional parameter (weight) space. For this purpose, a low-discrepancy LPτ sequence is used. The proposed method is discussed and tested on common benchmark problems.
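
To illustrate the idea, the sketch below (not the paper's exact algorithm; the network size, weight bounds, and sample count are assumptions) replaces gradient descent for a small 2-2-1 XOR network with a direct search of its nine-dimensional weight space: the error is evaluated at the points of a Sobol-type low-discrepancy sequence, used here as a stand-in for the LPτ sequence, and the best point found is kept.

```python
# Minimal sketch: global search of a neural network's weight space over a
# low-discrepancy point set instead of gradient descent. All sizes and bounds
# below are illustrative assumptions, not values taken from the paper.
import numpy as np
from scipy.stats import qmc

# XOR benchmark: 4 input patterns and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    """Mean squared error of a 2-2-1 sigmoid network whose 9 weights are packed in w."""
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

dim = 9
sampler = qmc.Sobol(d=dim, scramble=False)
unit_points = sampler.random_base2(m=12)                       # 2**12 points in [0, 1)^9
weights = qmc.scale(unit_points, [-10.0] * dim, [10.0] * dim)  # assumed weight box [-10, 10]^9

errors = np.apply_along_axis(mse, 1, weights)
best = weights[np.argmin(errors)]
print(f"best MSE over {len(weights)} low-discrepancy points: {errors.min():.4f}")
```

Because the low-discrepancy points cover the weight box more uniformly than independent random samples, such a search is less likely to leave large regions of the parameter space unexplored. A practical variant would refine the best point with a local optimizer; that step is omitted from this sketch.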