Characterization of training errors in supervised learning using gradient-based rules

  • Authors:
  • Jun Wang; B. Malakooti

  • Affiliations:
  • University of North Dakota, USA; Case Western Reserve University, USA

  • Venue:
  • Neural Networks
  • Year:
  • 1993


Abstract

In most existing supervised learning paradigms, a neural network is trained by minimizing an error function with a learning rule. The learning rules in common use are gradient-based, the best known being the backpropagation algorithm. This paper addresses an important issue concerning error minimization in the supervised learning of neural networks with gradient-based learning rules. It characterizes the asymptotic properties of training errors for various forms of neural networks in supervised learning and discusses their practical implications for neural network design through remarks and examples. The analytical results reveal that the quality of supervised learning depends on the rank of the training samples and the associated steady activation states. They also reveal the complexity of achieving a zero training error.
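The rank dependence noted in the abstract can be illustrated in the simplest setting. The following is a minimal sketch, not the paper's own analysis: it assumes a single-layer linear network y = Wx trained by batch gradient descent on a squared-error function, and the `train` helper and sample matrices are hypothetical. When the matrix of training inputs has full column rank the gradient iteration drives the training error to zero, whereas a rank-deficient sample matrix with inconsistent targets leaves an irreducible residual error.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, T, lr=0.01, steps=5000):
    """Batch gradient descent on E(W) = 0.5 * ||W X - T||_F^2."""
    W = np.zeros((T.shape[0], X.shape[0]))
    for _ in range(steps):
        W -= lr * (W @ X - T) @ X.T  # gradient of E with respect to W
    return W

# Full-rank samples: three linearly independent inputs in R^3,
# so a weight matrix achieving zero training error exists.
X_full = np.eye(3)
T = rng.standard_normal((2, 3))
W = train(X_full, T)
print("full-rank residual:", np.linalg.norm(W @ X_full - T))  # approx. 0

# Rank-deficient samples: the third input duplicates the first while the
# targets differ, so no weights fit both and the error cannot reach zero.
X_def = X_full.copy()
X_def[:, 2] = X_def[:, 0]
W = train(X_def, T)
print("rank-deficient residual:", np.linalg.norm(W @ X_def - T))  # > 0
```

Under these assumptions, gradient descent converges to the least-squares solution; the duplicated input with conflicting targets makes the linear system inconsistent, which is the simplest instance of the rank condition the abstract describes.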