Machine Learning
Visualization and exploration of high-dimensional functions using the functional anova decomposition
Many automated learning procedures lack interpretability, operating effectively as black boxes: they provide a prediction tool but no explanation of the underlying dynamics that drive it. A common approach to interpretation is to plot the dependence of a learned function on one or two predictors. We present a method that seeks not to display the behavior of a function, but to evaluate the importance of non-additive interactions within any set of variables. Should the function be close to a sum of low-dimensional components, these components can be viewed and even modeled parametrically. Alternatively, the work here provides an indication of where intrinsically high-dimensional behavior takes place. The calculations used in this paper correspond closely to the functional ANOVA decomposition, a well-developed construction in statistics. In particular, the proposed score of interaction importance measures the loss associated with projecting the prediction function onto a space of additive models. The algorithm runs in linear time, and we present displays of the output as a graphical model of the function for interpretation purposes.
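To illustrate the idea behind such an interaction score (this is a minimal grid-based sketch, not the paper's linear-time algorithm), the snippet below computes the functional ANOVA decomposition of a two-input function on a uniform grid over [0, 1]^2: the interaction component is what remains after subtracting the grand mean and both main effects, and its share of the total variance serves as an interaction-importance score. The function name `interaction_score` and the grid construction are assumptions for illustration only.

```python
import numpy as np

def interaction_score(f, n=201):
    """Fraction of the variance of f(x1, x2) on [0, 1]^2 that is due to
    the two-way interaction, estimated by functional ANOVA on a grid."""
    x = np.linspace(0.0, 1.0, n)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    F = f(X1, X2)                                # n x n table of values
    mu = F.mean()                                # grand mean f_0
    f1 = F.mean(axis=1) - mu                     # main effect of x1
    f2 = F.mean(axis=0) - mu                     # main effect of x2
    F12 = F - mu - f1[:, None] - f2[None, :]     # interaction residual f_{12}
    total_var = ((F - mu) ** 2).mean()
    return float((F12 ** 2).mean() / total_var)

# An additive function scores (numerically) zero; a product does not.
print(interaction_score(lambda a, b: a + np.sin(3 * b)))  # near 0
print(interaction_score(lambda a, b: a * b))              # clearly positive
```

A function with a small score is well approximated by a sum of low-dimensional components and can be inspected with one-variable dependence plots; a large score flags a region of intrinsically higher-dimensional behavior.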