Evaluating standard techniques for implicit diversity

  • Authors: Ulf Johansson, Tuve Löfström, Lars Niklasson

  • Affiliations:
    • Ulf Johansson: University of Borås, Department of Business and Informatics, Borås, Sweden
    • Tuve Löfström: University of Borås, Department of Business and Informatics, Borås, Sweden, and University of Skövde, Department of Humanities and Informatics, Skövde, Sweden
    • Lars Niklasson: University of Skövde, Department of Humanities and Informatics, Skövde, Sweden

  • Venue: PAKDD'08: Proceedings of the 12th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining
  • Year: 2008

Abstract

When performing predictive modeling, ensembles are often utilized in order to boost accuracy. The problem of how to maximize ensemble accuracy is, however, far from solved. In particular, the relationship between ensemble diversity and accuracy is, especially for classification, not completely understood. More specifically, the fact that ensemble diversity and base classifier accuracy are highly correlated makes it necessary to balance these properties instead of just maximizing diversity. In this study, three standard techniques for obtaining implicit diversity in neural network ensembles are evaluated using 14 UCI data sets. The experiments show that standard resampling, i.e., dividing the training data by instances, produces more diverse models, but at the expense of base classifier accuracy, thus resulting in less accurate ensembles. Building ensembles from neural networks with heterogeneous architectures improves test set accuracy, but without actually increasing diversity. The results regarding resampling over features are inconclusive: the ensembles become more diverse, but the level of test set accuracy is unchanged. For the setups evaluated, ensemble training accuracy and base classifier training accuracy are positively correlated with ensemble test accuracy, but the opposite holds for diversity; i.e., ensembles with low diversity are generally more accurate.
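To make the three implicit-diversity techniques concrete, the sketch below shows how each could be realized for a neural network ensemble: instance resampling (bootstrap samples of the training data), feature resampling (random subsets of the input features), and heterogeneous architectures (varying hidden layer sizes across members), together with a simple pairwise-disagreement diversity measure. This is not the authors' experimental setup; the data set, network sizes, ensemble size, and use of scikit-learn's MLPClassifier are illustrative assumptions.

```python
# A minimal sketch (assumed setup, not the paper's) of three implicit-diversity
# techniques for neural network ensembles, plus a pairwise-disagreement measure.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in for a UCI-style data set
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def build_ensemble(n_members=10, resample_instances=False,
                   resample_features=False, heterogeneous=False):
    """Train n_members MLPs, each diversified by the chosen implicit technique."""
    members = []
    for i in range(n_members):
        # Heterogeneous architectures: vary the hidden layer size per member.
        hidden = (5 + 3 * i,) if heterogeneous else (10,)
        net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=i)

        # Instance resampling: bootstrap sample of the training instances.
        if resample_instances:
            idx = rng.integers(0, len(X_tr), size=len(X_tr))
        else:
            idx = np.arange(len(X_tr))

        # Feature resampling: random subset of the input features.
        if resample_features:
            feats = rng.choice(X_tr.shape[1], size=X_tr.shape[1] // 2, replace=False)
        else:
            feats = np.arange(X_tr.shape[1])

        net.fit(X_tr[np.ix_(idx, feats)], y_tr[idx])
        members.append((net, feats))
    return members

def ensemble_predict(members, X):
    """Majority vote over the member predictions (binary labels assumed)."""
    votes = np.stack([net.predict(X[:, feats]) for net, feats in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def disagreement(members, X):
    """Average pairwise disagreement: a simple ensemble diversity measure."""
    preds = np.stack([net.predict(X[:, feats]) for net, feats in members])
    pairs = [(i, j) for i in range(len(preds)) for j in range(i + 1, len(preds))]
    return np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])

for name, kwargs in [("instance resampling", dict(resample_instances=True)),
                     ("feature resampling", dict(resample_features=True)),
                     ("heterogeneous nets", dict(heterogeneous=True))]:
    ens = build_ensemble(**kwargs)
    acc = np.mean(ensemble_predict(ens, X_te) == y_te)
    print(f"{name}: test accuracy={acc:.3f}, diversity={disagreement(ens, X_te):.3f}")
```

Under this sketch, comparing the printed accuracy and disagreement values across the three configurations mirrors the kind of diversity-versus-accuracy trade-off the abstract describes, though any actual numbers depend entirely on the assumed data set and settings.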