Ensemble selection for superparent-one-dependence estimators

  • Authors:
  • Ying Yang; Kevin Korb; Kai Ming Ting; Geoffrey I. Webb

  • Affiliations:
  • School of Computer Science and Software Engineering, Faculty of Information Technology, Monash University, VIC, Australia (all authors)

  • Venue:
  • AI'05 Proceedings of the 18th Australian Joint conference on Advances in Artificial Intelligence
  • Year:
  • 2005

Abstract

SuperParent-One-Dependence Estimators (SPODEs) loosen naive Bayes' attribute independence assumption by allowing each attribute to depend on a common single attribute (the superparent) in addition to the class. An ensemble of SPODEs can achieve high classification accuracy at modest computational cost. This paper investigates how to select SPODEs for ensembling. Various popular model selection strategies are presented, and their learning efficacy and efficiency are theoretically analyzed and empirically verified. Accordingly, guidelines are developed for choosing among selection criteria in differing contexts.
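To make the model structure concrete, the following is a minimal sketch (not the authors' implementation) of a SPODE ensemble over discrete attributes. Each SPODE rooted at superparent attribute sp estimates P(y, x) ∝ P(y, x_sp) · Π_i P(x_i | y, x_sp) from frequency counts with Laplace smoothing, and the ensemble averages the scores of the selected SPODEs, AODE-style; all function and variable names here are illustrative.

```python
from collections import Counter

def train_spode(data, labels, sp):
    """Collect counts for a SPODE with superparent attribute index `sp`."""
    joint = Counter()  # counts of (y, x_sp)
    cond = Counter()   # counts of (i, x_i, y, x_sp)
    for x, y in zip(data, labels):
        joint[(y, x[sp])] += 1
        for i, v in enumerate(x):
            cond[(i, v, y, x[sp])] += 1
    return joint, cond

def spode_score(x, y, sp, joint, cond, n, n_attrs):
    """Unnormalised P(y, x) under the SPODE rooted at `sp` (Laplace smoothing)."""
    p = (joint[(y, x[sp])] + 1) / (n + 2)
    for i in range(n_attrs):
        if i == sp:
            continue
        p *= (cond[(i, x[i], y, x[sp])] + 1) / (joint[(y, x[sp])] + 2)
    return p

def ensemble_predict(x, models, classes, n, n_attrs):
    """Average the scores of all selected SPODEs and pick the argmax class."""
    best, best_score = None, -1.0
    for y in classes:
        s = sum(spode_score(x, y, sp, j, c, n, n_attrs)
                for sp, (j, c) in models.items())
        if s > best_score:
            best, best_score = y, s
    return best

# Toy binary data: two attributes, the class equals attribute 0.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
n, n_attrs = len(data), 2
# Here every attribute is used as a superparent; selection strategies
# (the subject of this paper) would instead keep only a subset.
models = {sp: train_spode(data, labels, sp) for sp in range(n_attrs)}
print(ensemble_predict((1, 0), models, classes={0, 1}, n=n, n_attrs=n_attrs))  # → 1
```

The `models` dictionary is where a selection criterion would act: rather than averaging over all attributes as superparents, it would retain only the SPODEs that the chosen criterion ranks highly.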