Geometrical interpretation and architecture selection of MLP
IEEE Transactions on Neural Networks
This paper suggests a geometrical interpretation of the multilayer perceptron (MLP): the hidden neurons are viewed as building blocks for constructing the target function, with their weights and biases determining the geometrical shapes and positions of those blocks. Based on this interpretation, a guideline for MLP architecture selection is proposed, and various prevalent approaches to the over-fitting problem are reviewed from this geometrical viewpoint. In particular, the popular regularization methods are studied in detail. The geometrical interpretation not only offers a simple explanation of why regularization is effective at alleviating over-fitting, but also predicts a potential problem with regularization, which is then verified.
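The building-block view described in the abstract can be illustrated with a minimal sketch (not taken from the paper; the function and parameter values below are illustrative assumptions). For a one-input MLP with sigmoid hidden units, each unit contributes one sigmoid "block": its input weight sets the steepness, its bias shifts the transition point along the x-axis, and the output weight scales the block. Two steep sigmoids of opposite output sign combine into a localized bump:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_1d(x, w_hidden, b_hidden, w_out, b_out):
    """Output of a 1-input, single-hidden-layer sigmoid MLP.

    Each hidden unit j contributes one sigmoid building block:
    w_hidden[j] sets its steepness, -b_hidden[j] / w_hidden[j] sets
    the x-position of its transition, and w_out[j] scales its height.
    """
    h = sigmoid(np.outer(x, w_hidden) + b_hidden)  # hidden activations
    return h @ w_out + b_out

# Illustrative parameters (assumed, not from the paper):
# two steep sigmoids stepping up at x = -1 and x = +1; subtracting
# the second from the first yields a bump centered at x = 0.
w_h = np.array([10.0, 10.0])   # steepness of each block
b_h = np.array([10.0, -10.0])  # transitions at x = -1 and x = +1
w_o = np.array([1.0, -1.0])    # add one step, subtract the other

x = np.linspace(-3.0, 3.0, 601)
y = mlp_1d(x, w_h, b_h, w_o, 0.0)  # ~1 near x = 0, ~0 at the ends
```

Shifting `b_h` moves the bump, and scaling `w_h` sharpens or flattens it, which is the sense in which weights and biases determine the geometrical shapes and positions of the blocks.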