There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction has focused on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing either a Bayesian approach, a weighted combination of knowledge bases, or artificial training data representing the prior knowledge. We validate our approach using a standard statistical data set.
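To make the correspondence between probabilistic rules and Gaussian basis function networks concrete, the following is a minimal sketch of a normalized Gaussian basis function network. It is not the paper's actual formulation: the function names (`gaussian_basis`, `nrbf_predict`), the unit encoding as `(mu, sigma, w)` triples, and the two example "rules" are all illustrative assumptions. The idea it demonstrates is that each basis unit can be read as one rule (a Gaussian premise with center `mu` and width `sigma`, and a conclusion `w`), and the normalized activations act as posterior weights on the rule conclusions.

```python
import math

def gaussian_basis(x, mu, sigma):
    """Unnormalized Gaussian basis function centered at mu with width sigma."""
    sq_dist = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def nrbf_predict(x, units):
    """Output of a normalized Gaussian basis function network.

    Each unit is a hypothetical (mu, sigma, w) triple. The normalized
    activations sum to one and can be interpreted as the posterior
    probabilities of the individual rules, so the network output is a
    probability-weighted combination of the rule conclusions w.
    """
    activations = [gaussian_basis(x, mu, sigma) for mu, sigma, _ in units]
    total = sum(activations)
    return sum(a * w for a, (_, _, w) in zip(activations, units)) / total

# Two illustrative "rules", each encoded as one Gaussian unit:
#   if x is near (0, 0), conclude 0.0; if x is near (4, 4), conclude 1.0.
units = [((0.0, 0.0), 1.0, 0.0),
         ((4.0, 4.0), 1.0, 1.0)]

print(nrbf_predict((0.1, -0.2), units))  # close to 0.0 (first rule dominates)
print(nrbf_predict((3.9, 4.2), units))   # close to 1.0 (second rule dominates)
```

Because the units are interpretable in isolation, pruning or merging them (as in the complexity-reduction methods the abstract mentions) directly yields a smaller, more concise rule set.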