Influence of Hyperparameters on Random Forest Accuracy

  • Authors:
  • Simon Bernard, Laurent Heutte, Sébastien Adam

  • Affiliation:
  • Université de Rouen, LITIS EA 4108, Saint-Etienne du Rouvray, France 76801

  • Venue:
  • MCS '09 Proceedings of the 8th International Workshop on Multiple Classifier Systems
  • Year:
  • 2009


Abstract

In this paper we present our work on the Random Forest (RF) family of classification methods. Our goal is to go one step further in understanding RF mechanisms by studying the parametrization of the reference algorithm Forest-RI. In this algorithm, a randomization principle is applied during tree induction: at each node, K features are randomly selected, among which the best split is chosen. The strength of the randomization in the tree induction is thus controlled by the hyperparameter K, which plays an important role in building accurate RF classifiers. We have therefore focused our experimental study on this hyperparameter and its influence on classification accuracy. For that purpose, we have evaluated the Forest-RI algorithm on several machine learning problems with different settings of K in order to understand how it acts on RF performance. We show that the default values of K traditionally used in the literature are globally near-optimal, except for some cases in which they are all significantly sub-optimal. Additional experiments were therefore conducted on those datasets, and they highlight the crucial role played by feature relevancy in finding the optimal setting of K.
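The randomization step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the toy score table, and the use of a score lookup in place of a real impurity computation (e.g. Gini gain over actual data) are illustrative assumptions; only the mechanism of drawing K candidate features and splitting on the best of them follows the Forest-RI description.

```python
import random

def forest_ri_split(features, split_score, k, rng):
    # Forest-RI randomization (sketch): at each node, draw K features
    # uniformly without replacement, then keep only the best split
    # found among those K candidates.
    candidates = rng.sample(features, k)
    return max(candidates, key=split_score)

# Toy illustration: 'f3' offers the best split overall, but it is only
# eligible when the random draw of K candidates happens to include it.
# (The scores here are made up; a real tree would compute them from data.)
scores = {"f0": 0.1, "f1": 0.4, "f2": 0.2, "f3": 0.9}
chosen = forest_ri_split(list(scores), scores.get, k=2,
                         rng=random.Random(42))
```

With K equal to the total number of features, the draw is vacuous and the globally best feature is always chosen; small K strengthens the randomization, which is exactly the trade-off the paper studies.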