Autonomous design of artificial neural networks by neurex. Neural Computation.
Backpropagation learning with a (1+1) ES. Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation.
While there exist many techniques for finding the parameters that minimize an error function, only those methods that solely perform local computations are used in connectionist networks. The most popular learning algorithm for connectionist networks is the back-propagation procedure [13], which can be used to update the weights by the method of steepest descent. In this paper, we examine steepest descent and analyze why it can be slow to converge. We then propose four heuristics for achieving faster rates of convergence while adhering to the locality constraint. These heuristics suggest that every weight of a network should be given its own learning rate and that these rates should be allowed to vary over time. Additionally, the heuristics suggest how the learning rates should be adjusted. Two implementations of these heuristics, namely momentum and an algorithm called the delta-bar-delta rule, are studied, and simulation results are presented.
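To make the per-weight, time-varying learning rates concrete, here is a minimal sketch of a delta-bar-delta-style update: each weight's rate grows additively when the current gradient agrees in sign with an exponential average of past gradients, and shrinks multiplicatively when it disagrees. The hyperparameter names (`kappa`, `phi`, `theta`) and the quadratic test function are illustrative choices, not taken from the paper.

```python
import numpy as np

def delta_bar_delta(grad_fn, w, steps=200, lr0=0.1,
                    kappa=0.01, phi=0.5, theta=0.7):
    """Sketch of a delta-bar-delta-style learning-rate adaptation.

    Each weight keeps its own learning rate: increased by the additive
    constant kappa when the gradient sign agrees with a running average
    of past gradients, multiplied by phi (< 1) when the sign flips.
    """
    lr = np.full_like(w, lr0)        # one learning rate per weight
    delta_bar = np.zeros_like(w)     # exponential average of gradients
    for _ in range(steps):
        delta = grad_fn(w)
        agree = delta * delta_bar    # > 0 iff signs agree
        lr = np.where(agree > 0, lr + kappa,        # consistent sign: grow
             np.where(agree < 0, lr * phi, lr))     # sign flip: shrink
        w = w - lr * delta           # steepest-descent step, per-weight rate
        delta_bar = (1 - theta) * delta + theta * delta_bar
    return w

# Illustrative use: minimize E(w) = sum(w**2), whose gradient is 2*w.
w_final = delta_bar_delta(lambda w: 2 * w, np.array([3.0, -4.0]))
```

Note how locality is preserved: each weight's rate is adjusted using only that weight's own gradient history, which is the constraint the abstract emphasizes.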