A modular single-hidden-layer perceptron for letter recognition

  • Authors:
  • Gao Daqi; Shangming Zhu; Wenbing Gu

  • Affiliations:
  • Department of Computer Science, State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai, China (all authors)

  • Venue:
  • ICANN'05: Proceedings of the 15th International Conference on Artificial Neural Networks: Biological Inspirations - Volume Part I
  • Year:
  • 2005

Abstract

An n-class problem is decomposed into n two-class problems, which naturally gives rise to modular multilayer perceptrons (MLPs). A single-output MLP represents one class and is trained on a two-class learning subset. Each training subset consists of all samples from a specific class and part of the samples from its nearest classes. If the decision boundary of a single-output MLP is open, its outputs are amended by a correction coefficient. This paper shows that the generalization of a single-output MLP is seriously affected by sample disequilibrium. Therefore, the samples from the smaller class have to be multiplied by an enlarging factor. The results on letter recognition show that the above methods are effective.
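
As a rough illustration of the one-MLP-per-class decomposition with minority-class enlargement described in the abstract, the sketch below is one possible Python implementation, not the authors' original code. The use of scikit-learn's MLPClassifier as the single-hidden-layer perceptron, the fixed enlarging factor, and the hidden-layer size are assumptions; selecting only the nearest classes for the negative subset and the open-boundary correction coefficient are omitted for brevity.

```python
# Sketch: one single-output, single-hidden-layer MLP per class, trained on a
# two-class subset in which the (smaller) target class is replicated by an
# "enlarging factor" to counter sample disequilibrium.
# Assumptions: scikit-learn MLPClassifier, fixed enlarging factor, and all
# non-target samples used as negatives (the paper uses only nearest classes).
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_modular_mlps(X, y, n_classes, enlarging_factor=3, hidden_units=10):
    """Train one single-output MLP per class (one-vs-rest decomposition)."""
    modules = []
    for c in range(n_classes):
        pos = X[y == c]            # all samples of the target class
        neg = X[y != c]            # remaining samples serve as negatives here
        # Enlarge the small class by simple replication.
        pos_enlarged = np.repeat(pos, enlarging_factor, axis=0)
        X_sub = np.vstack([pos_enlarged, neg])
        y_sub = np.hstack([np.ones(len(pos_enlarged)), np.zeros(len(neg))])
        mlp = MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=500)
        mlp.fit(X_sub, y_sub)
        modules.append(mlp)
    return modules

def predict(modules, X):
    """Assign each sample to the module with the largest positive-class score."""
    scores = np.column_stack([m.predict_proba(X)[:, 1] for m in modules])
    return scores.argmax(axis=1)
```

For letter recognition as in the paper, n_classes would be 26 and X the feature vectors of the letter samples; the enlarging factor would in practice be chosen per class according to how unbalanced its two-class subset is.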