Improve the Performance of Random Forests by Introducing Weight Update Technique
IHMSC '10 Proceedings of the 2010 Second International Conference on Intelligent Human-Machine Systems and Cybernetics - Volume 01
Data classification is a central problem in data mining and machine learning. The process involves constructing a model from a set of historical data instances in which one feature is designated as the class. The model is then used to classify instances whose class feature is unknown. An important development in data classification has been the use of a set of classifiers built from different, but possibly overlapping, sets of instances; this approach is known as ensemble-based classification. Random Forests is an example of ensemble-based classification in which the outputs of many trees are combined to classify an instance. Developed by Breiman in 2001, the technique has proved effective and is representative of the state of the art in data classification. In this paper we propose an enhancement that boosts the overall performance of Random Forests. Random Forests takes two parameters: the number of trees, and the number of features to be randomly drawn from the set of all features at each split in a tree. We investigate and incorporate an information-theoretic measure of the predictive power of the features in a given dataset, namely Information Gain. We show experimentally that the predictive power of the features provides a guide to setting the second parameter of Random Forests, the number of randomly drawn features to split on.
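As a minimal sketch of the Information Gain measure the abstract refers to, the snippet below computes the entropy reduction obtained by splitting a toy dataset on each discrete feature; the dataset, function names, and variable names are illustrative assumptions, not the paper's actual implementation or experimental data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in class entropy from splitting on a discrete feature."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

# Toy dataset (hypothetical): two binary features, binary class.
feature_a = [0, 0, 1, 1, 0, 1]  # perfectly predicts the class
feature_b = [0, 1, 0, 1, 0, 1]  # weakly related to the class
labels    = [0, 0, 1, 1, 0, 1]

gains = {"a": information_gain(feature_a, labels),
         "b": information_gain(feature_b, labels)}
# feature_a matches the class exactly, so its gain equals the base
# entropy of the labels (1.0 bit here), while feature_b gains far less.
```

A ranking of features by such gain values is the kind of signal the paper proposes for choosing how many randomly drawn features each tree split should consider.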