Increased Rates of Convergence Through Learning Rate Adaptation

  • Authors:
  • Robert A. Jacobs

  • Affiliations:
  • -

  • Venue:
  • -
  • Year:
  • 1987

Abstract

While there exist many techniques for finding the parameters that minimize an error function, only those methods that solely perform local computations are used in connectionist networks. The most popular learning algorithm for connectionist networks is the back-propagation procedure [13], which can be used to update the weights by the method of steepest descent. In this paper, we examine steepest descent and analyze why it can be slow to converge. We then propose four heuristics for achieving faster rates of convergence while adhering to the locality constraint. These heuristics suggest that every weight of a network should be given its own learning rate and that these rates should be allowed to vary over time. Additionally, the heuristics suggest how the learning rates should be adjusted. Two implementations of these heuristics, namely momentum and an algorithm called the delta-bar-delta rule, are studied and simulation results are presented.
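
As a reading aid, the sketch below illustrates a delta-bar-delta style update of the kind the abstract describes: each weight keeps its own learning rate, the rate grows additively when the current gradient agrees in sign with an exponential average of past gradients, and shrinks multiplicatively when the signs disagree. This is a minimal illustration, not code from the paper; the constant names and values (kappa, phi, theta) are assumed for the example only.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar,
                         kappa=0.01, phi=0.2, theta=0.7):
    """One delta-bar-delta style update (illustrative sketch).

    w         -- weight vector
    grad      -- current gradient dE/dw for each weight
    lr        -- per-weight learning rates
    delta_bar -- exponential average of past gradients
    kappa, phi, theta -- assumed example constants, not values from the paper
    """
    # Sign agreement between the current gradient and the running average
    # of past gradients decides how each learning rate changes.
    agree = grad * delta_bar
    lr = np.where(agree > 0, lr + kappa,           # same sign: increase additively
         np.where(agree < 0, lr * (1.0 - phi),     # opposite sign: decrease multiplicatively
                  lr))                             # zero product: leave unchanged
    # Steepest-descent step using the per-weight rates.
    w = w - lr * grad
    # Update the exponential average of gradients for the next step.
    delta_bar = (1.0 - theta) * grad + theta * delta_bar
    return w, lr, delta_bar
```

The point of the per-weight, time-varying rates is that on error surfaces whose curvature differs greatly across weights, steep directions can keep small steps while shallow directions are allowed to speed up, which is why plain steepest descent with one global rate can converge slowly.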