Intelligent Data Analysis
For many practical applications, model transparency is an essential requirement. The probabilistic radial basis function (PRBF) network is an effective non-linear classifier, but, like most other neural network models, it does not readily yield explanations for its decisions. Recently, two general methods for explaining a model's decisions on individual instances have been introduced; both decompose a model's prediction into the contributions of its individual attributes. By exploiting the marginalization property of the Gaussian distribution, we show that the PRBF is especially well suited to these explanation techniques. We demonstrate the resulting methods by explaining the PRBF's decisions for new unlabeled cases, and we accompany the presentation with a visualization technique that works both for single instances and for attributes and their values, thus providing a valuable tool for inspecting otherwise opaque models.
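The idea behind the decomposition can be sketched with a toy example. The following is not the paper's exact algorithm but a minimal illustration, under assumed names, of why Gaussian models suit prediction-difference explanations: marginalizing out an attribute of a Gaussian is just dropping the corresponding row/column of the mean and covariance, so the posterior "without attribute i" is available in closed form, and the contribution of attribute i is the change in posterior when it is removed.

```python
import numpy as np

# Hedged sketch (names and setup are assumptions, not the paper's code):
# a toy Gaussian class-conditional classifier where each attribute's
# contribution is measured as a prediction difference.

def gauss_pdf(x, mean, cov):
    """Multivariate normal density N(x; mean, cov)."""
    diff = x - mean
    d = len(x)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def class_posterior(x, dims, means, covs, priors):
    """P(class | x restricted to the attributes in `dims`).
    Marginalizing a Gaussian = dropping rows/columns of mean and cov."""
    dims = list(dims)
    idx = np.ix_(dims, dims)
    likes = np.array([p * gauss_pdf(x[dims], m[dims], c[idx])
                      for m, c, p in zip(means, covs, priors)])
    return likes / likes.sum()

def attribute_contributions(x, cls, means, covs, priors):
    """Contribution of attribute i = posterior computed from all
    attributes minus posterior with attribute i marginalized out."""
    d = len(x)
    full = class_posterior(x, range(d), means, covs, priors)[cls]
    return np.array([
        full - class_posterior(x, [j for j in range(d) if j != i],
                               means, covs, priors)[cls]
        for i in range(d)
    ])

# Toy data: only attribute 0 separates the two classes, so it should
# receive essentially all of the credit for the decision.
means = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]
contrib = attribute_contributions(np.array([0.0, 0.0]), 0,
                                  means, covs, priors)
```

In this setup the uninformative attribute 1 gets a contribution of (numerically) zero, while attribute 0 accounts for nearly the whole posterior margin, which is the kind of per-attribute breakdown the explanation and visualization methods operate on.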