Wrapping the Naive Bayes classifier to relax the effect of dependences

  • Authors:
  • Jose Carlos Cortizo; Ignacio Giraldez; Mari Cruz Gaya

  • Affiliations:
  • Artificial Intelligence & Network Solutions S.L., and Universidad Europea de Madrid, Madrid, Spain; Universidad Europea de Madrid, Madrid, Spain; Universidad Europea de Madrid, Madrid, Spain

  • Venue:
  • IDEAL'07: Proceedings of the 8th International Conference on Intelligent Data Engineering and Automated Learning
  • Year:
  • 2007

Abstract

The Naive Bayes classifier is based on the (unrealistic) assumption of independence among the values of the attributes given the class value. Consequently, its effectiveness may decrease in the presence of interdependent attributes. In spite of this, in recent years the Naive Bayes classifier has earned a privileged position for several reasons [1]. We present DGW (Dependency Guided Wrapper), a wrapper that uses information about dependences to transform the data representation so as to improve Naive Bayes classification. This paper presents experiments comparing the performance and execution time of 12 DGW variations against 12 previous approaches, such as constructive induction of Cartesian product attributes and wrappers that search for optimal subsets of attributes. Experimental results show that DGW generates a new data representation that allows Naive Bayes to obtain better accuracy more often than any other wrapper tested. The DGW variations also reach the best possible accuracy more often than the state-of-the-art wrappers, while often spending less time in the attribute subset search.
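
The independence assumption discussed above, and one of the prior approaches the paper compares against (constructive induction of Cartesian product attributes), can be illustrated with a minimal sketch. This is not the authors' DGW implementation; the classifier, the join_attributes helper, and the toy data below are hypothetical constructions assuming categorical attributes and Laplace smoothing.

```python
from collections import Counter, defaultdict

# Minimal categorical Naive Bayes: scores P(c) * prod_i P(x_i | c),
# i.e. attribute values are assumed independent given the class.
class NaiveBayes:
    def fit(self, X, y):
        self.class_counts = Counter(y)
        self.total = len(y)
        self.value_counts = defaultdict(int)   # (attr_index, value, class) -> count
        self.values = defaultdict(set)          # distinct values seen per attribute
        for row, c in zip(X, y):
            for i, v in enumerate(row):
                self.value_counts[(i, v, c)] += 1
                self.values[i].add(v)
        return self

    def predict_one(self, row):
        best_class, best_score = None, float("-inf")
        for c, cc in self.class_counts.items():
            score = cc / self.total
            for i, v in enumerate(row):
                # Laplace smoothing avoids zero probabilities for unseen pairs
                score *= (self.value_counts[(i, v, c)] + 1) / (cc + len(self.values[i]))
            if score > best_score:
                best_class, best_score = c, score
        return best_class

# Constructive induction of a Cartesian product attribute: two interdependent
# attributes (indices i and j) are merged into one joint attribute, so their
# dependence no longer violates the independence assumption.
def join_attributes(X, i, j):
    return [
        tuple(v for k, v in enumerate(row) if k not in (i, j)) + ((row[i], row[j]),)
        for row in X
    ]

# Hypothetical toy data: attributes 0 and 1 are perfectly dependent.
X = [("a", "a", "x"), ("a", "a", "y"), ("b", "b", "x"), ("b", "b", "y")]
y = ["pos", "pos", "neg", "neg"]

nb_plain = NaiveBayes().fit(X, y)
nb_joined = NaiveBayes().fit(join_attributes(X, 0, 1), y)

test = ("a", "a", "x")
print(nb_plain.predict_one(test))
print(nb_joined.predict_one(join_attributes([test], 0, 1)[0]))
```

In the joined representation the duplicated evidence from the two dependent attributes is counted only once, which is the general motivation behind representation-changing wrappers such as DGW.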