In this work, we study how the selection of training examples affects the learning procedure in a Boolean neural network, and how this effect relates to the complexity of the target function and to the network architecture. We analyze the generalization ability for different target functions with particular architectures through an analytical calculation of the minimum number of examples needed to obtain full generalization (i.e., zero generalization error). The analysis of the training sets associated with this minimum leads us to propose a general, architecture-independent criterion for selecting training examples. We checked the criterion through numerical simulations for several particular target functions with particular architectures, as well as for random target functions in a nonoverlapping receptive field perceptron. In all cases, the selective sampling criterion led to an improvement in generalization ability compared with pure random sampling. We also show that for the parity problem, one of the most widely used benchmarks for testing learning algorithms, only the use of the whole set of examples ensures global learning in a depth-two architecture. This difficulty can be overcome by considering a tree-structured network of depth 2 log2(N) - 1.
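The tree-structured construction for parity can be sketched as follows. This is an illustrative assumption, not the paper's implementation: each XOR of two bits is realized by a small depth-two threshold network (two hidden units computing a AND NOT b and b AND NOT a, plus an output OR unit), and these units are composed in a balanced binary tree over the N inputs. A direct composition gives 2 log2(N) layers; the 2 log2(N) - 1 bound cited in the abstract comes from merging adjacent layers, which this sketch does not attempt.

```python
def xor_threshold(a, b):
    """XOR of two 0/1 bits via a depth-2 linear-threshold network.

    Hidden units compute (a AND NOT b) and (b AND NOT a);
    the output unit OR-combines them.
    """
    h1 = 1 if a - b - 0.5 > 0 else 0   # a AND NOT b
    h2 = 1 if b - a - 0.5 > 0 else 0   # b AND NOT a
    return 1 if h1 + h2 - 0.5 > 0 else 0  # h1 OR h2

def tree_parity(bits):
    """Parity of N bits (N a power of 2) via a balanced tree of XOR units.

    Each tree level halves the number of values, so the tree has
    log2(N) XOR levels.
    """
    layer = list(bits)
    while len(layer) > 1:
        layer = [xor_threshold(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]
```

For example, `tree_parity([1, 0, 1, 1])` returns 1 (three ones), while `tree_parity([1, 1, 1, 1])` returns 0.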