Correcting evaluation bias of relational classifiers with network cross validation

  • Authors:
  • Jennifer Neville; Brian Gallagher; Tina Eliassi-Rad; Tao Wang

  • Affiliations:
  • Purdue University, Departments of Computer Science and Statistics, West Lafayette, IN, USA; Lawrence Livermore National Laboratory, Livermore, CA, USA; Rutgers University, Department of Computer Science, Piscataway, NJ, USA; Purdue University, Department of Computer Science, West Lafayette, IN, USA

  • Venue:
  • Knowledge and Information Systems
  • Year:
  • 2012

Abstract

Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains, where instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). We propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 − Type II error).
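The core remedy the abstract describes is to score competing classifiers on disjoint (non-overlapping) test folds before applying a paired t-test, rather than on repeatedly resampled, overlapping evaluation sets. The Python sketch below illustrates only that disjoint-fold idea; it is a simplification, not the authors' full network cross-validation procedure (which also controls the proportion of labeled nodes in the network). The names `ncv_folds`, `score_a`, and `score_b` are hypothetical, and the stand-in scorers replace what would in practice be the training and evaluation of two relational classifiers.

```python
import numpy as np
from scipy.stats import ttest_rel

def ncv_folds(nodes, k, seed=0):
    """Yield (train, test) node splits with k disjoint test folds.

    Each node is scored in exactly one fold, so the per-fold accuracies
    of two classifiers are not correlated through shared test instances
    (the overlap that inflates Type I error in resampled evaluation).
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(nodes))
    for fold in np.array_split(order, k):
        held_out = set(fold.tolist())
        test = [nodes[i] for i in fold]
        train = [nodes[i] for i in range(len(nodes)) if i not in held_out]
        yield train, test

if __name__ == "__main__":
    nodes = list(range(100))  # toy node ids standing in for a real network
    rng = np.random.default_rng(1)

    # Stand-in scorers: in practice each would train a relational
    # classifier with labels visible on `train` and return its accuracy
    # on the hidden labels of `test`.
    def score_a(train, test):
        return 0.80 + 0.02 * rng.standard_normal()

    def score_b(train, test):
        return 0.78 + 0.02 * rng.standard_normal()

    # Same seed -> identical folds for both classifiers (paired design).
    acc_a = [score_a(tr, te) for tr, te in ncv_folds(nodes, k=10)]
    acc_b = [score_b(tr, te) for tr, te in ncv_folds(nodes, k=10)]
    t_stat, p_value = ttest_rel(acc_a, acc_b)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Because each test fold is used exactly once and both classifiers are scored on the same folds, the paired t-test's independence assumption across fold-level score differences is far less strained than when evaluation sets overlap.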