Estimates of Data Complexity in Neural-Network Learning

  • Authors:
  • Věra Kůrková

  • Affiliations:
  • Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, Prague 8, Czech Republic

  • Venue:
  • SOFSEM '07: Proceedings of the 33rd Conference on Current Trends in Theory and Practice of Computer Science
  • Year:
  • 2007

Abstract

The complexity of data with respect to a particular class of neural networks is studied. Data complexity is measured by the magnitude of a certain norm of either the regression function induced by a probability measure describing the data or a function interpolating a sample of input/output pairs of training data chosen with respect to this probability. The norm is tailored to the type of computational units in the network class. It is shown that for data for which this norm is "small", the infima of error functionals over networks with an increasing number of hidden units converge relatively quickly to the global minima. Thus, for such data, networks of reasonable model complexity can achieve good performance during learning. For perceptron networks, the relationship between data complexity, data dimensionality, and smoothness is investigated.
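
The abstract does not name the norm; one natural reading, consistent with the general setting of Kůrková's work, is the variation norm with respect to the set G of functions computable by a single hidden unit. A hedged sketch of the Maurey–Jones–Barron-type rate bound this reading suggests (the identification of the abstract's "certain norm" with \|f\|_G below is an assumption, not a claim about the paper's exact statement):

```latex
% Hedged sketch of a Maurey--Jones--Barron-type bound. Assumptions:
%   G        ... the set of functions computable by one hidden unit,
%   span_n G ... networks with at most n hidden units,
%   \|f\|_G  ... the G-variation norm of f (assumed here to be the
%                "norm tailored to the type of computational units"),
%   s_G      ... the supremum of the norms of the elements of G.
\[
  \inf_{g \in \operatorname{span}_n G} \|f - g\|^2
    \;\le\; \frac{s_G^2\,\|f\|_G^2 - \|f\|^2}{n},
  \qquad s_G = \sup_{h \in G} \|h\|.
\]
% Under this reading, when the data-complexity measure \|f\|_G is
% small, the infima of the quadratic error functional over n-unit
% networks converge to the global minimum at rate O(1/n).
```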