Towards a Model Independent Method for Explaining Classification for Individual Instances
DaWaK '08 Proceedings of the 10th international conference on Data Warehousing and Knowledge Discovery
We present a method for explaining predictions for individual instances. The approach is general and can be used with any classification model that outputs class probabilities. It is based on decomposing a model's prediction into the individual contributions of each attribute. Our method works for so-called black-box models, such as support vector machines, neural networks, and nearest-neighbor algorithms, as well as for ensemble methods such as boosting and random forests. We demonstrate that the generated explanations closely follow the learned models, and we present a visualization technique that shows the utility of our approach and enables the comparison of different prediction methods.
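The decomposition idea can be sketched as follows: for each attribute, compare the model's predicted probability on the full instance with the average prediction obtained when that attribute's value is marginalized over a background sample. This is a minimal, one-attribute-at-a-time illustration only (the names `explain_instance` and `toy_predict_proba` are hypothetical, and the full method in the paper is more elaborate); it assumes the model exposes a `predict_proba`-style function.

```python
import numpy as np

def explain_instance(predict_proba, X_background, x, target_class):
    """Decompose a predicted probability into one contribution per attribute.

    Each contribution is p(class | all attributes) minus the average
    prediction when attribute i is replaced by values drawn from the
    background data, i.e. an estimate of p(class | x without attribute i).
    """
    p_full = predict_proba(x.reshape(1, -1))[0, target_class]
    contributions = np.empty(x.shape[0])
    for i in range(x.shape[0]):
        # Copy the instance once per background row, then overwrite
        # attribute i with the background values to marginalize it out.
        X_perturbed = np.tile(x, (X_background.shape[0], 1))
        X_perturbed[:, i] = X_background[:, i]
        p_without_i = predict_proba(X_perturbed)[:, target_class].mean()
        contributions[i] = p_full - p_without_i
    return p_full, contributions

def toy_predict_proba(X):
    # Hypothetical stand-in model: the probability of class 1 depends
    # only on attribute 0 (a logistic function of its value).
    z = 1.0 / (1.0 + np.exp(-X[:, 0]))
    return np.column_stack([1.0 - z, z])

rng = np.random.default_rng(0)
X_bg = rng.normal(size=(200, 3))          # background sample
x = np.array([2.0, 0.0, 0.0])             # instance to explain
p, contrib = explain_instance(toy_predict_proba, X_bg, x, target_class=1)
```

In this toy setup, attribute 0 receives a large positive contribution, while attributes 1 and 2, which the model ignores, receive contributions of (numerically) zero. Because the sketch treats attributes one at a time, it can miss interaction effects, which is one of the limitations a model-independent explanation method has to address.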