The parity function is one of the most widely used Boolean functions for testing learning algorithms, because of both its simple definition and its great complexity. We construct a family of modular architectures that implement the parity function, in which every member of the family can be characterized by the maximum fan-in of the network, i.e., the maximum number of connections a neuron can receive. We analyze the generalization ability of the modular networks, first by computing analytically the minimum number of examples needed for perfect generalization and then by numerical simulations. Both results show that the generalization ability of these networks improves systematically with the degree of modularity of the network. We also analyze the influence of the selection of examples on the emergence of generalization ability, by comparing the learning curves obtained through a random selection of examples with those obtained through examples selected according to a general algorithm we recently proposed (2000).
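
To make the fan-in-limited decomposition concrete, the sketch below illustrates how an N-bit parity can be assembled from a tree of small parity modules, each restricted to at most k inputs. This is a minimal functional sketch of the decomposition idea only, not a reproduction of the paper's specific network architecture; the helper names parity_module and modular_parity are hypothetical and introduced here for illustration.

```python
from functools import reduce

def parity_module(bits):
    """One 'module': the parity (XOR) of a small group of bits."""
    return reduce(lambda a, b: a ^ b, bits, 0)

def modular_parity(bits, k):
    """Compose parity modules of fan-in at most k into a tree
    until a single output bit remains (requires k >= 2)."""
    assert k >= 2, "fan-in must be at least 2 for the recursion to shrink"
    if len(bits) <= k:
        return parity_module(bits)
    # Split the inputs into groups of at most k, let one module handle each
    # group, and feed the module outputs into the next layer of the tree.
    grouped = [parity_module(bits[i:i + k]) for i in range(0, len(bits), k)]
    return modular_parity(grouped, k)

if __name__ == "__main__":
    x = [1, 0, 1, 1, 0, 1, 0, 1]        # 8-bit example input
    for k in (2, 4, 8):                  # smaller k -> more modular network
        print(k, modular_parity(x, k))   # every fan-in yields the same parity
```

Varying k traces out the family: the smaller the maximum fan-in, the more modules and layers the construction needs, which is the sense in which members of the family are indexed by the maximum fan-in of the network.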