An entropy-based approach to enhancing Random Forests

  • Authors:
  • Mohamed Medhat Gaber; Harinder Singh Atwal

  • Affiliations:
  • School of Computing, University of Portsmouth, Portsmouth, Hampshire, UK (both authors)

  • Venue:
  • Intelligent Decision Technologies
  • Year:
  • 2013

Abstract

Data classification is a major problem in data mining and machine learning. The process involves constructing a model from a set of historical data instances in which one of the features is designated as the class. This model is then used to classify instances whose class feature is unknown. An important development in data classification has been the use of a set of classifiers built from different, but possibly overlapping, sets of instances. This approach is known as ensemble-based classification. Random Forests is an example of ensemble-based classification, in which the outputs of many trees are combined to classify an instance. Developed by Breiman in 2001, the technique has proved effective and remains representative of the state of the art in data classification. In this paper we propose an enhancement to the technique in order to boost the overall performance of Random Forests. Random Forests take two parameters: the number of trees, and the number of features randomly drawn from the set of all features at each split in the tree. We investigate and incorporate an information-theoretic measure of the predictive power of the features in a given dataset, namely Information Gain. We show experimentally that the predictive power of the features provides a guide to setting the second parameter of Random Forests, the number of randomly drawn features to split on.
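
To make the underlying idea concrete, below is a minimal Python sketch (an illustration, not the authors' published procedure): it estimates each feature's Information Gain, IG(S, A) = H(S) - sum_v (|S_v|/|S|) H(S_v), and uses the result to guide the choice of scikit-learn's max_features, which corresponds to the paper's second parameter. The equal-width binning, the mean-gain threshold, and the sample dataset are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def entropy(y):
    """Shannon entropy H(S) of a label vector, in bits."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y, n_bins=10):
    """IG(S, A) = H(S) - sum_v (|S_v|/|S|) * H(S_v).
    Continuous features are discretised into equal-width bins
    (an assumption for this sketch)."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    bins = np.digitize(x, edges[1:-1])
    gain = entropy(y)
    for v in np.unique(bins):
        mask = bins == v
        gain -= mask.mean() * entropy(y[mask])
    return gain

X, y = load_breast_cancer(return_X_y=True)
gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])

# Illustrative heuristic (not the paper's exact rule): let the number of
# features with above-average gain suggest how many features to draw
# at each split, and compare against the common sqrt(d) default.
n_informative = int((gains > gains.mean()).sum())
default_mtry = int(np.sqrt(X.shape[1]))
for m in sorted({default_mtry, max(1, n_informative)}):
    rf = RandomForestClassifier(n_estimators=100, max_features=m,
                                random_state=0)
    score = cross_val_score(rf, X, y, cv=5).mean()
    print(f"max_features={m}: CV accuracy = {score:.3f}")
```

Whether the gain-guided setting beats the sqrt(d) default will vary by dataset; the paper's claim is precisely that feature predictive power, measured this way, is a useful guide for that choice.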