This work presents a new, highly nonlinear data set for training and testing neural networks. The set is based on the parity concept: the classifier must learn the target class by checking the parity of more than one symbol in the data. Classical N-bit parity checks the parity of a single symbol, so the classifier acts as a dichotomizer no matter how large N is. With the hybrid N-parity data set, the classifier's task is considerably more complex, requiring classification into four or more categories. Experimental results show that multi-layer feedforward neural networks and simple recurrent neural networks trained with gradient descent under different learning algorithms can learn the seen pattern samples of the hybrid N-parity set. These networks, however, cannot generalize to unseen patterns; a generalization failure occurs. A hard-wired neural network, requiring no training or adaptation, is given that generalizes to all seen and unseen patterns. For training neural networks to full generalization, the hybrid N-parity set remains a challenging data set to work with.
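The abstract does not spell out the construction, but one plausible reading is an input alphabet with more than two symbols, where the label combines the parity of each tracked symbol's count. The sketch below follows that reading for ternary strings over {0, 1, 2} with two tracked symbols, which yields four classes; the alphabet, the function names, and the label encoding are illustrative assumptions, not the paper's definition.

```python
# Illustrative sketch of a hybrid N-parity style data set (an assumed
# construction, not the paper's): inputs are length-n strings over
# {0, 1, 2}, and the class is the pair (parity of #1s, parity of #2s).
from itertools import product

def hybrid_parity_label(seq):
    """Return a class in {0, 1, 2, 3} from the parities of two symbols."""
    ones = seq.count(1) % 2   # parity of symbol 1's count
    twos = seq.count(2) % 2   # parity of symbol 2's count
    return 2 * ones + twos    # encode the parity pair as one of 4 classes

def make_dataset(n):
    """Enumerate all length-n ternary strings with their labels."""
    return [(seq, hybrid_parity_label(seq))
            for seq in product((0, 1, 2), repeat=n)]

if __name__ == "__main__":
    data = make_dataset(4)            # 3**4 = 81 patterns
    for seq, label in data[:5]:
        print(seq, "->", label)
```

Under this reading, classical N-bit parity falls out as the special case of a binary alphabet with a single tracked symbol, which collapses the label back to two classes.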
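The paper's hard-wired network is not reproduced here, but the sketch below shows one standard way parity can be wired with fixed threshold units, extended to the two-symbol reading above: hidden unit i fires when at least i of the relevant inputs are on, and alternating +1/-1 output weights turn that unit count into its parity. All weights and names are assumptions for illustration.

```python
# One way a hard-wired (untrained) network for this task could look,
# assuming the illustrative ternary/two-symbol construction above.
import numpy as np

def step(z):
    """Heaviside threshold activation."""
    return (z >= 0).astype(int)

def parity_net(bits):
    """Parity of a 0/1 vector via one layer of hard-wired threshold units."""
    n = len(bits)
    s = np.sum(bits)
    # hidden unit i (i = 1..n) fires when at least i inputs are on
    h = step(s - np.arange(1, n + 1) + 0.5)
    # alternating output weights: 1 - 1 + 1 - ... over k firing units = k mod 2
    w = (-1) ** np.arange(n)
    return int(h @ w)

def hybrid_net(seq):
    """Class in {0, 1, 2, 3} from hard-wired parities of symbols 1 and 2."""
    seq = np.asarray(seq)
    return 2 * parity_net((seq == 1).astype(int)) \
             + parity_net((seq == 2).astype(int))

if __name__ == "__main__":
    from itertools import product
    # the weights are fixed, so every pattern is labeled correctly,
    # whether or not it was ever "seen"
    for seq in product((0, 1, 2), repeat=5):
        assert hybrid_net(seq) == 2 * (seq.count(1) % 2) + (seq.count(2) % 2)
    print("all 3**5 patterns labeled correctly")
```

Because nothing is learned, such a network generalizes to all patterns by construction, which is consistent with the abstract's contrast between trained networks that memorize the seen samples and a hand-built network that covers the whole input space.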