The aim of this work is to propose modifications of the Random Forests algorithm that improve its prediction performance. The suggested modifications are intended to increase the strength and decrease the correlation of the individual trees of the forest, and to improve the function that determines how the outputs of the base classifiers are combined. This is achieved by modifying the node-splitting and the voting procedures. Different approaches concerning the number of predictors and the evaluation measure that determines the impurity of a node are examined. Regarding the voting procedure, modifications based on feature selection, clustering, nearest neighbors, and optimization techniques are proposed. The novel feature of the current work is that it proposes modifications not only to improve the construction or the voting mechanisms but also, for the first time, examines the overall improvement of the Random Forests algorithm (a combination of construction and voting). We evaluate the proposed modifications on 24 datasets. The evaluation demonstrates that the proposed modifications have a positive effect on the performance of the Random Forests algorithm and provide comparable, and in most cases better, results than the existing approaches.
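To make the voting-procedure idea concrete: a generic way to replace plain majority voting in a forest is to weight each tree's vote, e.g. by its estimated accuracy. The sketch below is an illustrative example of weighted voting in general, not the paper's specific method; the function name and the use of per-tree accuracies as weights are assumptions for illustration.

```python
from collections import Counter

def weighted_vote(tree_predictions, tree_weights):
    """Combine per-tree class predictions using per-tree weights.

    tree_predictions: list of class labels, one per tree in the forest
    tree_weights: list of non-negative weights, one per tree
                  (e.g. each tree's out-of-bag accuracy -- an
                  illustrative choice, not the paper's)
    Returns the class with the largest total weight.
    """
    scores = Counter()
    for label, weight in zip(tree_predictions, tree_weights):
        scores[label] += weight
    return max(scores, key=scores.get)

# Plain majority voting is the special case of equal weights.
preds = ["A", "B", "A", "B", "B"]
print(weighted_vote(preds, [1, 1, 1, 1, 1]))            # -> "B" (majority)
print(weighted_vote(preds, [0.9, 0.4, 0.8, 0.3, 0.2]))  # -> "A" (weighted)
```

With equal weights the combination reduces to standard majority voting; unequal weights let stronger trees override a numerical majority of weaker ones, which is the kind of effect the voting modifications in the abstract aim for.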