Artificial Neural Networks: Approximation and Learning Theory
Interpreting Neural Networks in the Frame of the Logic of Łukasiewicz
IWANN '01 Proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks: Connectionist Models of Neurons, Learning Processes and Artificial Intelligence-Part I
IEEE Transactions on Neural Networks
Training data development with the D-optimality criterion
IEEE Transactions on Neural Networks
Double robustness analysis for determining optimal feedforward neural network architecture
ICNC'05 Proceedings of the First international conference on Advances in Natural Computation - Volume Part I
This work studies a local robustness approach, formulated in terms of the probability density function (PDF), for selecting the architecture of a multilayered feedforward artificial neural network (MLFANN). The method is applied to a non-linear autoregressive (NAR) model with innovative outliers. The procedure selects the locally most robust (around a particular sample) MLFANN architecture candidate for exact learning of a finite set of real samples, and is based on the output PDF of the MLFANN. Since each MLFANN architecture produces a specific output PDF when its input follows a heavy-tailed distribution, a distance between probability densities serves as the measure of local robustness. A Monte Carlo study illustrates the selection method.
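The selection idea described in the abstract can be sketched as follows: evaluate each candidate architecture on a nominal input and on a heavy-tailed input, estimate the two output densities, and prefer the architecture whose output PDF changes least. This is only a minimal illustration, not the paper's method: the candidate widths, the use of fixed random-weight tanh networks in place of trained MLFANNs, and the choice of total-variation distance between histogram density estimates are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def make_net(widths, rng):
    """Tiny fixed-weight feedforward net with tanh hidden units -- an
    illustrative stand-in for a trained MLFANN candidate architecture."""
    dims = [1] + list(widths) + [1]
    Ws = [rng.standard_normal((a, b)) / np.sqrt(a)
          for a, b in zip(dims, dims[1:])]
    def forward(x):
        h = x.reshape(-1, 1)
        for W in Ws[:-1]:
            h = np.tanh(h @ W)
        return (h @ Ws[-1]).ravel()   # linear output layer
    return forward

def tv_distance(a, b, bins=60):
    """Total-variation distance between histogram estimates of two sample
    densities (one simple choice of distance between output PDFs)."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
n = 50_000
clean = rng.standard_normal(n)         # nominal (light-tailed) input
heavy = rng.standard_t(df=2, size=n)   # heavy-tailed input (outlier regime)

candidates = [(4,), (8, 8), (16, 16, 16)]   # hypothetical hidden-layer widths
dists = []
for widths in candidates:
    net = make_net(widths, rng)
    dists.append(tv_distance(net(clean), net(heavy)))

best = int(np.argmin(dists))
print("TV distances:", [round(d, 3) for d in dists])
print("locally most robust candidate:", candidates[best])
```

In the paper the comparison is made for trained networks around a particular sample; here the histogram distance merely shows how "each architecture leads to a specific output PDF" translates into a scalar robustness score that can rank candidates.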