In this article we propose a new approach to feed-forward neural network modeling. Working within the framework of nonlinear regression models, we construct computer-aided D-optimal designs for this class of neural models; such designs can be viewed as a particular case of active learning. Classical algorithms are used to construct local approximate and local exact D-optimal designs. We observe that the generalization ability of a neural network (statisticians may be more familiar with the equivalent term "predictive ability") improves as the D-efficiency of the chosen "learning set design" increases. We thus show that the D-efficiency criterion can be the basis for a better strategy for the neural network learning phase than the uniform random sampling that is standard in this field. Our proposal rests on two possible strategies: a One-Step Strategy or a Full Sequential Strategy. Intensive Monte Carlo simulations on an academic example show that the proposed D-optimal "learning set design" strategies lead to a substantial improvement in the use of neural network models.
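To make the idea concrete: for a p-parameter model, the D-efficiency of a design xi relative to the D-optimal design xi* is conventionally defined as (det M(xi) / det M(xi*))^(1/p), where M is the Fisher information matrix of the linearized model. The paper relies on classical design-construction algorithms; the sketch below is not the authors' implementation but a minimal illustration of local D-optimal point selection, assuming a one-hidden-layer tanh network, a nominal parameter vector theta (hence a "local" design), and a simple greedy determinant-update rule. All function names and the greedy rule are illustrative assumptions.

```python
import numpy as np

def jacobian(x, theta):
    """Jacobian of f(x) = sum_j w_j * tanh(a_j * x + b_j) w.r.t. (a, b, w)."""
    a, b, w = np.split(theta, 3)               # H weights per parameter group
    h = np.tanh(np.outer(x, a) + b)            # (n, H) hidden activations
    dh = 1.0 - h ** 2                          # tanh'(.) = 1 - tanh(.)^2
    return np.hstack([w * dh * x[:, None],     # df/da_j
                      w * dh,                  # df/db_j
                      h])                      # df/dw_j

def greedy_d_optimal(candidates, theta, n_points, ridge=1e-8):
    """Greedily pick design points that maximize det of the information matrix."""
    J = jacobian(candidates, theta)
    M = ridge * np.eye(J.shape[1])             # regularized information matrix
    chosen = []
    for _ in range(n_points):
        Minv = np.linalg.inv(M)
        # Matrix-determinant lemma: det(M + j j^T) = det(M) * (1 + j^T M^-1 j),
        # so the best next point maximizes the quadratic form below.
        gains = np.einsum("ip,pq,iq->i", J, Minv, J)
        gains[chosen] = -np.inf                # exact design: no replication
        best = int(np.argmax(gains))
        chosen.append(best)
        M += np.outer(J[best], J[best])
    return candidates[chosen]

# Usage: a local D-optimal 12-point design on [-2, 2] at a nominal theta (H = 3).
rng = np.random.default_rng(0)
theta0 = rng.normal(size=9)
grid = np.linspace(-2.0, 2.0, 201)
print(greedy_d_optimal(grid, theta0, n_points=12))
```

The greedy rule is a crude stand-in for the exchange-type algorithms typically used for exact designs; it nonetheless shows the mechanism the abstract describes, namely choosing learning points by an information-matrix determinant criterion instead of uniform random sampling.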