C4.5: Programs for Machine Learning.
Democracy in neural nets: voting schemes for classification. Neural Networks.
Feature selection for ensembles. AAAI '99/IAAI '99: Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference.
Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Applied Intelligence.
Use of Contextual Information for Feature Ranking and Discretization. IEEE Transactions on Knowledge and Data Engineering.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
Ensemble Feature Selection Based on the Contextual Merit. DaWaK '01: Proceedings of the Third International Conference on Data Warehousing and Knowledge Discovery.
Examining Locally Varying Weights for Nearest Neighbor Algorithms. ICCBR '97: Proceedings of the Second International Conference on Case-Based Reasoning Research and Development.
Decomposition of Heterogeneous Classification Problems. IDA '97: Proceedings of the Second International Symposium on Advances in Intelligent Data Analysis, Reasoning about Data.
Data Mining using MLC++, A Machine Learning Library in C++. ICTAI '96: Proceedings of the 8th International Conference on Tools with Artificial Intelligence.
Correlation-Based and Contextual Merit-Based Ensemble Feature Selection. IDA '01: Proceedings of the 4th International Conference on Advances in Intelligent Data Analysis.
Combining Answers of Sub-classifiers in the Bagging-Feature Ensembles. RSEISP '07: Proceedings of the International Conference on Rough Sets and Intelligent Systems Paradigms.
Recent research has demonstrated the benefits of using ensembles of classifiers for classification problems. Ensembles of diverse and accurate base classifiers are constructed by machine learning methods that manipulate the training sets. One way to manipulate the training set is to apply feature selection heuristics when generating the base classifiers. In this paper we examine two such heuristics: correlation-based and contextual merit-based. Both rely on quite similar assumptions concerning heterogeneous classification problems. Experiments are conducted on several data sets from the UCI Repository. We construct a fixed number of base classifiers over selected feature subsets and refine the ensemble iteratively, promoting diversity of the base classifiers and relying on growth in global accuracy. According to the experimental results, the contextual merit-based ensemble outperforms both the correlation-based ensemble and C4.5. The correlation-based ensemble produces more diverse and simpler base classifiers, and the iterations promoting diversity have a less pronounced effect than for the contextual merit-based ensemble.
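The refinement loop described in the abstract can be sketched in minimal form. This is not the paper's implementation: decision stumps on single features stand in for C4.5 trees over selected feature subsets, pairwise prediction disagreement stands in for whichever diversity measure the authors use, the toy data set and all names (`Stump`, `vote`, `disagreement`) are illustrative assumptions. The sketch only shows the control flow: build a fixed number of base classifiers over feature subsets, then repeatedly replace the least diverse member, keeping the change only if global accuracy grows.

```python
import random

random.seed(0)

def make_data(n):
    """Toy 2-class data: 4 features, label = 1 iff x0 + x1 > 1; x2, x3 are noise."""
    return [(x, 1 if x[0] + x[1] > 1 else 0)
            for x in ([random.random() for _ in range(4)] for _ in range(n))]

train, test = make_data(200), make_data(100)

class Stump:
    """One-feature threshold classifier (illustrative stand-in for a C4.5
    tree built over a selected feature subset)."""
    def __init__(self, feat, data):
        self.feat = feat
        best = -1.0
        for thr in [i / 10 for i in range(1, 10)]:
            for pos in (0, 1):  # pos = class predicted above the threshold
                acc = sum((pos if x[feat] > thr else 1 - pos) == y
                          for x, y in data) / len(data)
                if acc > best:
                    best, self.thr, self.pos = acc, thr, pos

    def predict(self, x):
        return self.pos if x[self.feat] > self.thr else 1 - self.pos

def vote(ensemble, x):
    """Majority vote over base classifiers; ties go to class 0."""
    return 1 if 2 * sum(c.predict(x) for c in ensemble) > len(ensemble) else 0

def accuracy(ensemble, data):
    return sum(vote(ensemble, x) == y for x, y in data) / len(data)

def disagreement(a, b, data):
    """Fraction of examples on which two base classifiers differ."""
    return sum(a.predict(x) != b.predict(x) for x, _ in data) / len(data)

# Fixed-size ensemble: one base classifier per single-feature subset.
ensemble = [Stump(f, train) for f in range(4)]

# Iterative refinement: find the least diverse member (smallest mean
# disagreement with the rest), try replacing it with a candidate trained
# on a random feature, and keep the change only if global accuracy grows.
for _ in range(10):
    div = [sum(disagreement(c, o, train) for o in ensemble if o is not c)
           for c in ensemble]
    worst = div.index(min(div))
    trial = list(ensemble)
    trial[worst] = Stump(random.randrange(4), train)
    if accuracy(trial, train) > accuracy(ensemble, train):
        ensemble = trial

print(round(accuracy(ensemble, test), 2))
```

Since replacements are accepted only on strict accuracy improvement, training accuracy is monotone over iterations; the same skeleton accommodates either feature-selection heuristic by changing how candidate feature subsets are generated.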