In an attempt to evolve the neural network technique from a "black box" tool into a semi-analytical one, we propose a novel modeling approach that imposes "generalized constraints" on a standard neural network. We redefine approximation problems using a new formalization that aims to embed prior knowledge explicitly into the model to the maximum extent. The resulting generalized-constraint neural network (GCNN) model consists of two submodels: one, constructed with the standard neural network technique, approximates the unknown part of the target function; the other, formed from partially known relationships, imposes generalized constraints on the whole model. Three issues arising from the combination of the two submodels are discussed: (a) the improved approximation the GCNN model provides compared with a standard neural network, (b) the identifiability of the parameters in the partially known relationships, and (c) the discrepancy in the approximation due to removable singularities in the target function. Numerical studies on three benchmark problems yield findings not previously reported in the literature, and significant benefits were observed from using the GCNN model in comparison with a standard neural network.
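As a rough illustration of the two-submodel idea, the sketch below fits an additive model: a partially known relationship g(x; a) = a·x with a free parameter a, plus a small neural network that approximates the unknown residual. The additive combination, the toy target function, and all parameter names here are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical GCNN-style sketch (assumed additive combination):
#   prediction(x) = g(x; a) + NN(x)
# where g(x; a) = a * x is the "partially known relationship" with an
# unknown parameter a, and NN is a one-hidden-layer tanh network that
# approximates the remaining, unknown part of the target.

rng = np.random.default_rng(0)

# Toy target (an assumption for this demo): y = 2*x + sin(3*x).
# We pretend the linear trend a*x is known in form but not in value.
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = 2.0 * x + np.sin(3.0 * x)

# Known submodel parameter.
a = 0.0

# Unknown submodel: 1 -> H -> 1 tanh network.
H = 20
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)       # hidden activations
    pred = a * x + (h @ W2 + b2)   # known part + NN part
    err = pred - y                 # pointwise residual

    # Full-batch gradients of mean squared error for the additive model.
    # Note: because the flexible NN can also absorb linear behavior, a is
    # not guaranteed to recover 2.0 exactly -- the identifiability issue
    # the abstract raises for partially known relationships.
    g_a = float(np.mean(err * x))
    g_W2 = h.T @ err / len(x); g_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    g_W1 = x.T @ dh / len(x); g_b1 = dh.mean(axis=0)

    a -= lr * g_a
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

final = a * x + np.tanh(x @ W1 + b1) @ W2 + b2
mse = float(np.mean((final - y) ** 2))
print(mse)
```

With the known linear form carrying part of the fit, the network only has to model the smooth residual, which is the intuition behind the reported approximation benefit of the combined model.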