Towards multi-layer perceptron as an evaluator through randomly generated training patterns

  • Authors:
  • Janis Zuters

  • Affiliations:
  • Department of Computer Science, University of Latvia, Riga, Latvia

  • Venue:
  • AIKED'06 Proceedings of the 5th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases
  • Year:
  • 2006

Abstract

The multi-layer perceptron (MLP) is widely used because many problems can be reduced to function approximation. Pattern evaluation, the topic of this article, belongs to this class of problems. The MLP handles function approximation quite well; however, an important prerequisite is a uniformly distributed set of training patterns. Unfortunately, such a set is not always available. This article examines whether adding randomly generated training patterns improves the training result in cases where only "positive" patterns are available.
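
The following is a minimal sketch of the idea outlined in the abstract: training an MLP evaluator from "positive" patterns only, supplemented by randomly generated patterns used as low-valued counter-examples. The library, network size, target values, and data generation are illustrative assumptions, not taken from the paper.

```python
# Sketch: MLP evaluator trained on positive patterns plus randomly
# generated additional patterns (assumed targets: 1.0 positive, 0.0 random).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_features = 8

# Simulated set of available "positive" patterns (fixed-length vectors).
positive_patterns = rng.uniform(0.4, 0.6, size=(200, n_features))

# Randomly generated additional patterns drawn uniformly over the input
# space, serving as counter-examples with a low evaluation target.
random_patterns = rng.uniform(0.0, 1.0, size=(200, n_features))

X = np.vstack([positive_patterns, random_patterns])
y = np.concatenate([np.ones(len(positive_patterns)),   # high score
                    np.zeros(len(random_patterns))])   # low score

# Small MLP approximating the evaluation function over the combined set.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# The trained network can then score new, unseen patterns.
print(mlp.predict(rng.uniform(0.0, 1.0, size=(5, n_features))))
```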