Visualization techniques utilizing the sensitivity analysis of models

  • Authors:
  • Ivo Kondapaneni, Pavel Kordík, Pavel Slavík

  • Affiliations:
  • Czech Technical University in Prague, Czech Republic (all authors)

  • Venue:
  • Proceedings of the 39th Conference on Winter Simulation: 40 years! The best is yet to come
  • Year:
  • 2007


Abstract

Models of real-world systems are increasingly generated from data that describe the systems' behavior. Data mining techniques, such as Artificial Neural Networks (ANNs), generate such models almost autonomously and deliver accurate models in a very short time. These models (sometimes called black-box models) have complex internal structures that are difficult to interpret, and we have very limited information about the credibility of their output. A model can be trusted only for certain configurations of the input variables, yet it is hard to determine which outputs are grounded in the training data and which are essentially random. In this paper, we present visualization techniques for the exploration of such models. The primary goal is to examine the behavior of a model in the neighborhood of the data vectors. A further goal is to estimate and locate the regions of the input space in which the model is credible. We have developed visualization techniques for both regression and classification problems. Finally, we present an algorithm that automatically locates the most interesting visualizations in the vast multidimensional space of input variables.
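
The central idea described in the abstract, inspecting a model's response in the neighborhood of a data vector, can be illustrated with a simple one-dimensional sensitivity sweep: one input is varied around a chosen data vector while the remaining inputs are held fixed, and the model's output is plotted against the varied input. The sketch below is not the authors' implementation; it is a minimal illustration in which `black_box_model`, `sensitivity_sweep`, and the example data vector are all hypothetical stand-ins for a trained model and its training data.

```python
# Minimal sketch (assumed interface, not the paper's code): plot a black-box
# model's response as each input is swept around a data vector while the
# other inputs stay fixed at the data vector's values.
import numpy as np
import matplotlib.pyplot as plt

def black_box_model(X):
    """Stand-in for a trained regression model; X has shape (n, 3)."""
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.2 * X[:, 2]

def sensitivity_sweep(model, data_vector, input_index, span=1.0, steps=100):
    """Vary one input around the data vector, keeping the others fixed."""
    values = np.linspace(data_vector[input_index] - span,
                         data_vector[input_index] + span, steps)
    X = np.tile(data_vector, (steps, 1))
    X[:, input_index] = values
    return values, model(X)

data_vector = np.array([0.3, -1.2, 0.8])  # a single (hypothetical) training vector

fig, axes = plt.subplots(1, 3, figsize=(9, 3), sharey=True)
for i, ax in enumerate(axes):
    xs, ys = sensitivity_sweep(black_box_model, data_vector, i)
    ax.plot(xs, ys)
    ax.axvline(data_vector[i], linestyle=":", linewidth=0.8)  # value at the data vector
    ax.set_xlabel(f"input {i}")
axes[0].set_ylabel("model response")
plt.tight_layout()
plt.show()
```

Each panel shows how the model behaves as one input moves away from a known data point; flat or smooth responses near the dotted line suggest regions where the model's output is anchored by data, while erratic behavior far from it hints at regions where the model should not be trusted.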