Towards more practical average bounds on supervised learning

  • Authors:
  • H. Gu; H. Takahashi

  • Affiliations:
  • Dept. of Commun. & Syst. Eng., Univ. of Electro-Commun., Chofu

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1996

Abstract

In this paper, we describe a method that enables us to study the average generalization performance of learning directly via hypothesis-testing inequalities. The resulting theory provides a unified view of average-case learning curves for concept learning and regression in realistic learning problems, not necessarily within the Bayesian framework. The advantages of the theory are that it alleviates the practical pessimism frequently attributed to the results of the Vapnik-Chervonenkis (VC) theory and related approaches, and that it provides general insights into generalization. Moreover, the bounds on learning curves are directly related to the number of adjustable system weights. Although the theory rests on an approximation assumption and does not apply to the worst-case learning setting, the precondition of the assumption is mild, and the approximation itself is only a sufficient condition for the validity of the theory. We illustrate the results with numerical simulations and apply the theory to examining the generalization ability of combinations of neural networks.
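
The abstract states that the bounds on learning curves are directly related to the number of adjustable system weights. The sketch below is not the authors' construction; it is a hypothetical numerical illustration of an average-case learning curve for ordinary least-squares regression, where the average excess test error above the noise floor is expected to shrink roughly like W/m for W adjustable weights and m training examples. The function name `avg_test_error` and all parameter values are assumptions made for illustration only.

```python
# Hypothetical illustration (not the paper's method): estimate an average-case
# learning curve by averaging the test error of least-squares regression over
# many random training sets, and compare it with a rough noise^2 * (1 + W/m)
# scaling, where W is the number of adjustable weights and m the sample size.
import numpy as np

rng = np.random.default_rng(0)

def avg_test_error(n_weights, m_train, n_trials=200, m_test=2000, noise=0.1):
    """Average squared test error of least squares with n_weights inputs."""
    true_w = rng.normal(size=n_weights)          # fixed target weight vector
    errors = []
    for _ in range(n_trials):
        # Draw a random training set and fit by ordinary least squares.
        X = rng.normal(size=(m_train, n_weights))
        y = X @ true_w + noise * rng.normal(size=m_train)
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Measure the squared error on an independent test set.
        X_test = rng.normal(size=(m_test, n_weights))
        y_test = X_test @ true_w + noise * rng.normal(size=m_test)
        errors.append(np.mean((X_test @ w_hat - y_test) ** 2))
    return float(np.mean(errors))

noise = 0.1
for W in (5, 20):
    for m in (50, 100, 200, 400):
        err = avg_test_error(W, m, noise=noise)
        # The excess error above noise^2 is expected to scale roughly like W/m.
        print(f"W={W:3d}  m={m:4d}  avg test MSE={err:.4f}  "
              f"noise^2 * (1 + W/m)={noise**2 * (1 + W / m):.4f}")
```

Under these assumptions, the printed averages should track the W/m reference more closely as m grows, giving a simple empirical picture of a learning-curve bound that depends on the number of adjustable weights.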