When individual classifiers are combined appropriately, a statistically significant increase in classification accuracy is usually obtained. Multiple classifier systems are the result of combining several individual classifiers. Following Breiman's methodology, this paper proposes a multiple classifier system based on a ''forest'' of fuzzy decision trees, i.e., a fuzzy random forest. This approach combines the robustness of multiple classifier systems, the power of randomness to increase the diversity of the trees, and the flexibility of fuzzy logic and fuzzy sets for managing imperfect data. Various combination methods for obtaining the final decision of the multiple classifier system are proposed and compared. Some of them are weighted combination methods, which weight the decisions of the different elements of the multiple classifier system (leaves or trees). A comparative study with several datasets shows the efficiency of the proposed multiple classifier system and of the various combination methods. The proposed multiple classifier system achieves good classification accuracy, comparable to that of the best classifiers, when tested on conventional datasets. Unlike other classifiers, however, it maintains similar accuracy when tested on imperfect datasets (with missing and fuzzy values) and on noisy datasets.
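A minimal sketch of the kind of weighted combination the abstract describes, assuming each fuzzy tree returns per-class membership degrees for every leaf an example reaches and that per-tree weights (e.g., validation accuracies) are available; the function name fuzzy_forest_predict and the data layout are hypothetical illustrations, not the paper's actual algorithm.

```python
import numpy as np

def fuzzy_forest_predict(leaf_memberships, tree_weights=None):
    """Combine fuzzy decision-tree outputs into a single class decision.

    leaf_memberships: list with one array per tree; each array has shape
        (n_reached_leaves, n_classes) and holds the membership degree of the
        example for each class at every leaf it reaches (in a fuzzy tree an
        example can reach several leaves with partial memberships).
    tree_weights: optional per-tree weights; uniform if omitted.
    """
    n_trees = len(leaf_memberships)
    if tree_weights is None:
        tree_weights = np.ones(n_trees)

    # Per-tree support for each class: aggregate the memberships of all
    # leaves reached by the example (leaf-level combination).
    tree_support = np.stack([m.sum(axis=0) for m in leaf_memberships])

    # Weighted combination across trees (tree-level combination), then take
    # the class with maximum overall support.
    forest_support = tree_weights @ tree_support
    return int(np.argmax(forest_support)), forest_support


# Toy usage: two trees, three classes.
t1 = np.array([[0.7, 0.2, 0.1], [0.4, 0.5, 0.1]])  # example reaches 2 leaves
t2 = np.array([[0.1, 0.8, 0.1]])                    # example reaches 1 leaf
label, support = fuzzy_forest_predict([t1, t2], tree_weights=np.array([0.9, 0.6]))
print(label, support)
```

Setting tree_weights to None corresponds to an unweighted majority-style combination, while plugging in per-tree or per-leaf quality scores mirrors the weighted variants the abstract compares.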