Using feature selection approaches to find the dependent features

  • Authors:
  • Qin Yang; Elham Salehi; Robin Gras

  • Affiliations:
  • School of Computer Science, University of Windsor, Windsor, ON, Canada (all authors)

  • Venue:
  • ICAISC'10: Proceedings of the 10th International Conference on Artificial Intelligence and Soft Computing, Part I
  • Year:
  • 2010

Abstract

Dependencies among features can decrease the performance and efficiency of many algorithms. Traditional methods can only detect linear dependencies or dependencies among a small number of features. In our research, we explore the use of feature selection approaches to find such dependencies. We apply and compare Relief, CFS, NB-GA and NB-BOA as feature selection approaches for identifying dependent features in our artificial data. Unexpectedly, Relief has the best performance in our experiments, even better than NB-BOA, a population-based evolutionary algorithm that uses population distribution information to find dependent features. This may be due to weak "link strengths" between features, or to the fact that the Naïve Bayes classifier used in these wrapper approaches cannot represent dependencies between features. However, the exact reason for these results remains an open problem for our future work.
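
To illustrate the general idea, the sketch below applies a basic Relief weighting scheme to artificial data in which one feature is a noisy copy of another. This is a minimal illustration assuming NumPy, not the authors' implementation or dataset; the function relief_weights and the way the dependency is constructed are hypothetical.

```python
import numpy as np

def relief_weights(X, y, n_samples=200, seed=None):
    """Basic Relief sketch: reward features that differ on the nearest miss,
    penalize features that differ on the nearest hit."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # normalize feature ranges
    w = np.zeros(d)
    for i in rng.choice(n, size=min(n_samples, n), replace=False):
        dists = np.abs(X - X[i]).sum(axis=1)        # Manhattan distance to all points
        dists[i] = np.inf                           # exclude the instance itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))   # nearest same-class point
        miss = np.argmin(np.where(y != y[i], dists, np.inf))  # nearest other-class point
        w -= np.abs(X[i] - X[hit]) / span / n_samples
        w += np.abs(X[i] - X[miss]) / span / n_samples
    return w

# Hypothetical artificial data: f1 is a noisy copy of f0 (a simple dependency),
# f2 is independent noise, and the class label depends on f0.
rng = np.random.default_rng(0)
f0 = rng.normal(size=1000)
f1 = f0 + 0.1 * rng.normal(size=1000)   # dependent on f0
f2 = rng.normal(size=1000)              # irrelevant feature
X = np.column_stack([f0, f1, f2])
y = (f0 > 0).astype(int)
print(relief_weights(X, y, seed=1))     # f0 and f1 should receive larger weights than f2
```

In this toy setup Relief assigns high weights to both f0 and its dependent copy f1, which hints at why a simple instance-based weighting can surface dependent features even though it does not model feature interactions explicitly.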