Randomness in generalization ability: a source to improve it

  • Authors: D. Sarkar
  • Affiliation: Dept. of Math. & Comput. Sci., Miami Univ., Coral Gables, FL
  • Venue: IEEE Transactions on Neural Networks
  • Year: 1996

Abstract

Among the several models of neurons and their interconnections, feedforward artificial neural networks (FFANNs) are the most popular because of their simplicity and effectiveness. Difficulties such as long learning times and local minima may not affect FFANNs as much as the question of generalization ability does, because a network needs to be trained only once and may then be used for a long time. This paper reports our observations about randomness in the generalization ability of FFANNs. A novel method for measuring generalization ability is defined. This method can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It is shown that if the correct-classification probability of a single network is greater than one half, then the generalization ability of a voting network increases as the number of networks in it is increased. Further analysis shows that the VC-dimension of the voting network model may increase monotonically as the number of networks in the voting network is increased.