Neural Network Learning Using Low-Discrepancy Sequence

  • Authors:
  • Ivan Jordanov; Robert Brown

  • Venue:
  • AI '99 Proceedings of the 12th Australian Joint Conference on Artificial Intelligence: Advanced Topics in Artificial Intelligence
  • Year:
  • 1999

Abstract

Backpropagation (BP) is one of the most frequently used practical methods for supervised training of artificial neural networks. During learning, BP may get stuck in local minima, producing a suboptimal solution and thus limiting the effectiveness of the training. This work addresses the problem of avoiding local minima and introduces a new learning technique that replaces the gradient descent algorithm in BP with an optimization method for global search in a multi-dimensional parameter (weight) space. For this purpose, a low-discrepancy LPτ sequence is used. The proposed method is discussed and then tested on common benchmark problems.
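To make the idea concrete, below is a minimal, hypothetical sketch of this kind of global search: candidate weight vectors are drawn from a Sobol sequence (the standard realization of Sobol's LPτ sequences) and the network error is evaluated at each point. The 2-2-1 XOR network, the [-10, 10] search box, the sample budget, and the use of scipy.stats.qmc are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: global search over neural-network weights using a
# Sobol (LP_tau) low-discrepancy sequence. Assumes scipy >= 1.7 for
# scipy.stats.qmc; the network, bounds, and budget are illustrative only.
import numpy as np
from scipy.stats import qmc

# Tiny 2-2-1 sigmoid network on the XOR benchmark (a common benchmark).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def mse(w):
    """Mean squared error of the 2-2-1 network with flat weight vector w."""
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

dim = 9                        # total number of weights and biases
lo, hi = -10.0, 10.0           # assumed search box for each weight

# Draw quasi-random weight vectors; low discrepancy means the points
# cover the weight box more evenly than pseudo-random sampling would.
sampler = qmc.Sobol(d=dim, scramble=False)
points = sampler.random(2 ** 12)       # 4096 points in [0, 1)^dim
weights = qmc.scale(points, lo, hi)    # map points into the search box

errors = np.array([mse(w) for w in weights])
best = weights[np.argmin(errors)]
print(f"best MSE over Sobol samples: {errors.min():.4f}")
```

Because a low-discrepancy sequence fills the weight box more uniformly than pseudo-random points, the same sample budget leaves smaller unexplored gaps in the search space, which is the property a global search of this kind relies on to avoid being trapped by a single local minimum.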