Investigating better multi-layer perceptrons for the task of classification
WSEAS Transactions on Computers
Slow training has always been one of the major concerns with MLP (multi-layer perceptron) networks. One long-standing remedy is to solve linear equations for the weights of the hidden and output layers. That approach, however, may rule out large networks and, possibly, data with a large number of input features. Since new studies show that large MLP networks can train better and often generalize better, the need for a fast training method remains. Just as data sampling has been used in statistics to speed up model building, this paper presents a novel sampling technique based on an ancient numeric concept, the Lo-Shu square, to help train MLP networks potentially 3 times faster while still producing acceptable models and preserving the option of using very large networks.
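The abstract does not spell out how the Lo-Shu square drives the sampling, but one plausible reading is that records are dealt into the nine cells of the 3x3 magic square and training then uses only the three cells of one row, i.e. one third of the data, which would match the "potentially 3 times faster" claim. The sketch below illustrates that reading only; the function name lo_shu_sample, the cyclic cell assignment, and the row-selection rule are assumptions, not the paper's published procedure.

```python
import numpy as np

# The 3x3 Lo-Shu magic square: every row, column, and diagonal sums to 15.
LO_SHU = np.array([[4, 9, 2],
                   [3, 5, 7],
                   [8, 1, 6]])

def lo_shu_sample(X, y, group=0, seed=0):
    """Deal records into the nine Lo-Shu cells, then keep only the
    three cells of one row of the square -- a one-third sample.

    `group` selects which row (0, 1, or 2) to keep. The round-robin
    assignment below is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    # Shuffle the record order, then assign cell labels 1..9 cyclically.
    order = rng.permutation(n)
    cell = np.empty(n, dtype=int)
    cell[order] = (np.arange(n) % 9) + 1
    # Keep the records whose cell label lies in the chosen Lo-Shu row
    # (each row's three values sum to 15, like every line of the square).
    keep = np.isin(cell, LO_SHU[group])
    return X[keep], y[keep]
```

Under this reading, one could train a candidate MLP once per row of the square (group 0, 1, 2) and keep the first acceptable model, so every record still has a chance of being seen while each individual training run touches only a third of the data.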