Fast training MLP networks with Lo-Shu data sampling

  • Authors:
  • Hung Han Chen

  • Affiliations:
  • Graphion, Jacksonville, FL

  • Venue:
  • AIKED'09 Proceedings of the 8th WSEAS international conference on Artificial intelligence, knowledge engineering and data bases
  • Year:
  • 2009

Abstract

The slow speed of training has always been one of the major concerns for MLP (Multi-Layer Perceptron) networks. One remedy to this issue has been to solve linear equations for the weights of the hidden and output layers. By doing so, however, this approach may limit the use of large networks and of data with a large number of input features. With new studies showing that large MLP networks can train better and often generalize better, the need for a new fast-training method remains. As data sampling has been used in statistics to speed up the modeling process, this paper presents a novel sampling technique based on an ancient numeric concept, Lo-Shu, to help train MLP networks potentially 3 times faster while still generating acceptable models and preserving the possibility of utilizing very large networks.
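The abstract does not spell out the sampling scheme itself. As a rough illustration only, the sketch below assumes that Lo-Shu sampling partitions the shuffled training set into nine blocks indexed by the 3x3 Lo-Shu magic square and draws one row of blocks (about a third of the data) per training pass, which would be consistent with the roughly 3x speedup mentioned. The function name lo_shu_sample and the block-selection rule are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of Lo-Shu-style data sampling (assumption: the training
# set is split into nine blocks indexed by the 3x3 Lo-Shu magic square, and
# one row of blocks -- roughly one third of the data -- is used per pass).
import numpy as np

# Classic Lo-Shu magic square: every row, column, and diagonal sums to 15.
LO_SHU = np.array([[4, 9, 2],
                   [3, 5, 7],
                   [8, 1, 6]])

def lo_shu_sample(n_samples, row=0, seed=0):
    """Return indices for one Lo-Shu row's subsample.

    The full index range is shuffled and split into nine equal blocks
    numbered 1..9; the three blocks named by the chosen magic-square row
    are concatenated into a ~1/3-size training subsample.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)
    blocks = np.array_split(order, 9)      # blocks[k] holds block number k+1
    chosen = LO_SHU[row]                   # e.g. row 0 -> blocks 4, 9, 2
    return np.concatenate([blocks[b - 1] for b in chosen])

# Example: draw ~1/3 of a 900-example training set for one fast pass.
idx = lo_shu_sample(900, row=0)
print(len(idx))  # ~300 indices drawn from three Lo-Shu blocks
```

Under this reading, cycling through the three rows would eventually expose the network to all nine blocks while each individual pass touches only a third of the data, which is one plausible way to reconcile faster training with acceptable model quality.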