Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
Neural Computation
When training a feedforward neural network with backpropagation (Rumelhart et al. 1986), local minima are a persistent problem because of the nonlinearity of the system. Several approaches have been used to attack this problem: for example, restarting training from a newly selected initial point, preprocessing the input data, or modifying the neural network itself. Here, we propose a computationally efficient method for avoiding some local minima.
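As a minimal illustration of the restart strategy mentioned above, the sketch below trains a small sigmoid network with plain backpropagation from several random initial points and keeps the best run. The network size, learning rate, and the XOR task are illustrative assumptions, not details from the original; NumPy is assumed available.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_once(X, y, hidden=2, lr=0.5, epochs=5000, rng=None):
    """One backpropagation run from a random initial point; returns (mse, weights)."""
    if rng is None:
        rng = np.random.default_rng()
    # random initial weights: each restart samples a new starting point
    W1 = rng.normal(size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass (MSE loss, sigmoid derivatives)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent updates
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    mse = float(np.mean((out - y) ** 2))
    return mse, (W1, b1, W2, b2)

# XOR: a classic task where some initial points lead to poor local minima
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
# restart several times and keep the best run
best_mse = min(train_once(X, y, rng=rng)[0] for _ in range(5))
print(best_mse)
```

Restarting is simple but costly, since each restart repeats the full training; the abstract's point is that a cheaper way of escaping some local minima is desirable.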