Recently, two general methods for explaining classification models and their predictions have been introduced. Both methods are based on the idea that the importance of a feature, or a group of features, in a specific model can be estimated by simulating a lack of knowledge about the values of the feature(s). For most models this requires an approximation obtained by averaging over all possible feature values. The probabilistic radial basis function network (PRBF) is one of the models for which such an approximation is not necessary; it therefore offers an opportunity to evaluate the quality of the approximation by comparing it to the exact solution. We present both explanation methods and demonstrate their behavior with the PRBF. The explanations make individual decisions of classifiers transparent and allow inspection and visualization of otherwise opaque models. We empirically compare the quality of explanations based on marginalization of the Gaussian distribution (the exact method) with explanations based on averaging over all feature values (the approximation). The results show that the approximation and the exact solution give very similar results, which increases confidence in the explanation methodology for other classification models as well.
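The approximation method described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: for each feature, "lack of knowledge" is simulated by replacing the feature's value with every value observed in a background sample and averaging the model's predicted class probability; the importance is the drop in probability relative to the original prediction. The names `model_proba`, `X_background`, and `prediction_difference` are assumed helpers introduced here for illustration.

```python
import numpy as np

def prediction_difference(model_proba, X_background, x, target_class):
    """Approximate feature importances by simulating lack of knowledge.

    model_proba  -- callable mapping a 1-D sample to class probabilities
                    (hypothetical interface, assumed for this sketch)
    X_background -- 2-D array supplying the candidate values per feature
    x            -- the instance being explained
    target_class -- index of the class whose probability is explained
    """
    base = model_proba(x)[target_class]
    importances = np.zeros(len(x))
    for i in range(len(x)):
        probs = []
        # Replace feature i with each observed value and re-predict.
        for v in np.unique(X_background[:, i]):
            x_mod = x.copy()
            x_mod[i] = v
            probs.append(model_proba(x_mod)[target_class])
        # Importance = original prediction minus the averaged prediction.
        importances[i] = base - np.mean(probs)
    return importances
```

For a model with Gaussian components, such as the PRBF, this averaging step can be replaced by exact marginalization of the Gaussian distribution over the omitted feature, which is the comparison the paper carries out.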