We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which the error function value is allowed to increase at some epochs. To this end, we require that the current error function value satisfy a nonmonotone criterion with respect to the maximum error function value over the M previous epochs, and we propose a subprocedure to compute M dynamically. The nonmonotone strategy can be incorporated into any batch training algorithm and provides fast, stable, and reliable learning. Experimental results on different classes of problems show that this approach improves the convergence speed and success percentage of first-order training algorithms and alleviates the need for fine-tuning problem-dependent heuristic parameters.
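To make the acceptance criterion concrete, below is a minimal sketch of nonmonotone batch gradient descent in Python: a step is accepted when the new error does not exceed the maximum error of the M most recent epochs (plus an Armijo-type sufficient-decrease term). The rule used here to adapt M is a hypothetical stand-in for the paper's subprocedure, whose details are not given in this abstract, and the function and parameter names (`nonmonotone_gd`, `M_max`, `delta`) are illustrative, not from the source.

```python
import numpy as np

def nonmonotone_gd(grad, error, w, lr=0.1, M_max=10, delta=1e-4, epochs=100):
    """Batch gradient descent with a nonmonotone acceptance test: the new
    error may rise above the current one as long as it stays below the
    maximum error of the M most recent epochs (minus an Armijo-type term)."""
    history = [error(w)]            # error values of recent epochs
    M = 1                           # memory length, adapted each epoch
    for _ in range(epochs):
        g = grad(w)
        step = lr
        ref = max(history[-M:])     # reference: worst error of last M epochs
        # Backtrack only while the nonmonotone condition fails.
        while error(w - step * g) > ref - delta * step * np.dot(g, g):
            step *= 0.5
            if step < 1e-12:
                break
        w = w - step * g
        e = error(w)
        # Hypothetical adaptation of M (stands in for the paper's
        # subprocedure): grow the memory while the error keeps falling,
        # shrink it when the error jumps.
        M = min(M + 1, M_max) if e <= history[-1] else max(1, M - 1)
        history.append(e)
    return w
```

For example, minimizing the quadratic E(w) = w·w recovers the minimizer at the origin:

```python
w_opt = nonmonotone_gd(lambda w: 2 * w,
                       lambda w: np.dot(w, w),
                       np.array([1.0, -2.0]))
```

The point of the relaxed test is that a step rejected by a strictly monotone (Armijo) criterion may still be accepted here, which avoids the excessive backtracking and tiny learning rates that monotone first-order methods often need.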