Constructing hidden units using examples and queries. NIPS-3: Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems 3.
Global Optimization for Neural Network Training. Computer, special issue on neural computing (companion issue to the Spring 1996 IEEE Computational Science & Engineering).
Fast Convergent Generalized Back-Propagation Algorithm with Constant Learning Rate. Neural Processing Letters.
Existing architectures and learning algorithms for the Generalized Congruence Neural Network (GCNN) either suffer from practical shortcomings or lack a rigorous theoretical foundation. In this paper, a novel GCNN architecture (BPGCNN) is proposed, together with a new error back-propagation learning algorithm developed for it. Experimental results on several benchmark problems show that the proposed BPGCNN outperforms the standard sigmoidal BPNN and several improved BPNN variants in both convergence speed and learning capability, and overcomes the drawbacks of the other existing GCNNs.
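The abstract does not give the BPGCNN update rules. For reference, the baseline it compares against, a standard sigmoidal BPNN trained by plain error back-propagation with a constant learning rate, can be sketched as below. The network size (2-3-1), learning rate, training schedule, and the XOR task are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of the comparison baseline (NOT the paper's BPGCNN):
# a 2-3-1 sigmoidal feed-forward network trained by vanilla error
# back-propagation with a constant learning rate on the XOR benchmark.
# All hyper-parameters here are illustrative assumptions.
import math
import random

XOR_DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.1 - 0.1, 1.0], 0.0)]
XOR_DATA[3] = ([1.0, 1.0], 0.0)  # (kept explicit for clarity)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    """One-hidden-layer sigmoidal network, squared-error back-propagation."""
    def __init__(self, n_in=2, n_hidden=3, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = rng.uniform(-1, 1)

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = sigmoid(sum(w * hj for w, hj in zip(self.w2, h)) + self.b2)
        return h, y

    def train_step(self, x, t, lr):
        h, y = self.forward(x)
        # Output delta for squared error through the sigmoid.
        dy = (y - t) * y * (1.0 - y)
        for j, hj in enumerate(h):
            # Hidden delta uses w2[j] BEFORE it is updated.
            dh = dy * self.w2[j] * hj * (1.0 - hj)
            self.w2[j] -= lr * dy * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dh * xi
            self.b1[j] -= lr * dh
        self.b2 -= lr * dy

def mse(net, data):
    return sum((net.forward(x)[1] - t) ** 2 for x, t in data) / len(data)

net = BPNN()
before = mse(net, XOR_DATA)
for _ in range(5000):                 # constant learning rate throughout
    for x, t in XOR_DATA:
        net.train_step(x, t, lr=0.5)
after = mse(net, XOR_DATA)
print(f"MSE before: {before:.4f}  after: {after:.4f}")
```

The slow, initialization-sensitive convergence of exactly this kind of constant-rate training is what the improved BPNN variants and the proposed BPGCNN aim to address.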