Machine Learning
Machine Learning - Special issue on inductive transfer
Machine Learning - Special issue on learning with probabilistic representations
The Random Subspace Method for Constructing Decision Forests
IEEE Transactions on Pattern Analysis and Machine Intelligence
Data mining: practical machine learning tools and techniques with Java implementations
Machine Learning
Ensembling neural networks: many could be better than all
Artificial Intelligence
Pairwise Classification as an Ensemble Technique
ECML '02 Proceedings of the 13th European Conference on Machine Learning
Ensemble Methods in Machine Learning
MCS '00 Proceedings of the First International Workshop on Multiple Classifier Systems
A Unified Decomposition of Ensemble Loss for Predicting Ensemble Performance
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Benefitting from the variables that variable selection discards
The Journal of Machine Learning Research
Combining Pattern Classifiers: Methods and Algorithms
Not So Naive Bayes: Aggregating One-Dependence Estimators
Machine Learning
Statistical Comparisons of Classifiers over Multiple Data Sets
The Journal of Machine Learning Research
MTForest: Ensemble Decision Trees based on Multi-Task Learning
Proceedings of the 2008 conference on ECAI 2008: 18th European Conference on Artificial Intelligence
A Novel Bayes Model: Hidden Naive Bayes
IEEE Transactions on Knowledge and Data Engineering
A model of inductive bias learning
Journal of Artificial Intelligence Research
Solving multiclass learning problems via error-correcting output codes
Journal of Artificial Intelligence Research
Semi-naive Exploitation of One-Dependence Estimators
ICDM '09 Proceedings of the 2009 Ninth IEEE International Conference on Data Mining
It is well known that diversity among component classifiers is crucial for constructing a strong ensemble. Most existing ensemble methods achieve this goal by resampling the training instances or the input features. Inspired by MTForest and AODE, which enumerate each input attribute together with the class attribute to create different component classifiers, we propose in this paper a novel general ensemble method based on manipulating the class labels. It generates different biased new class labels through the Cartesian product of the class attribute and each input attribute, and then builds a component classifier for each of them. Extensive experiments, using decision tree and naive Bayes as base classifiers respectively, show that the accuracy of our method is comparable to that of state-of-the-art ensemble methods. Finally, the bias-variance decomposition results reveal that the success of our method lies mainly in its ability to significantly reduce the bias of the base learner.
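The construction described in the abstract can be sketched in a few lines: for each input attribute, relabel every training instance with the Cartesian product of its class label and that attribute's value, train one component classifier on the relabeled data, and at prediction time marginalize the attribute value back out before combining the components. The sketch below is a minimal illustration under stated assumptions — discrete attributes, a tiny hand-rolled naive Bayes as the base learner, and probability-summing as the combination rule; the names `DiscreteNB`, `train_ensemble`, and `predict` are hypothetical and not from the paper.

```python
from collections import Counter, defaultdict

class DiscreteNB:
    """Tiny naive Bayes for discrete features, with Laplace smoothing."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.prior = Counter(y)               # label -> count
        self.n = len(y)
        self.counts = defaultdict(Counter)    # (feature_idx, label) -> value counts
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[(j, yi)][v] += 1
        # distinct observed values per feature (for smoothing denominators)
        self.values = [sorted({xi[j] for xi in X}) for j in range(len(X[0]))]
        return self

    def predict_proba(self, x):
        scores = {}
        for c in self.labels:
            p = (self.prior[c] + 1) / (self.n + len(self.labels))
            for j, v in enumerate(x):
                p *= (self.counts[(j, c)][v] + 1) / (self.prior[c] + len(self.values[j]))
            scores[c] = p
        z = sum(scores.values())
        return {c: s / z for c, s in scores.items()}

def train_ensemble(X, y):
    """One component per input attribute: relabel each instance with the
    Cartesian-product label (class, attribute value) and train on that."""
    components = []
    for j in range(len(X[0])):
        y_new = [(yi, xi[j]) for xi, yi in zip(X, y)]  # biased new class labels
        components.append(DiscreteNB().fit(X, y_new))
    return components

def predict(components, x, labels):
    """Marginalize the attribute value out of each component's posterior
    over (class, value) pairs, then sum the class probabilities."""
    votes = Counter()
    for clf in components:
        for (c, _v), p in clf.predict_proba(x).items():
            votes[c] += p
    return max(labels, key=lambda c: votes[c])

# Toy usage: the class copies the first attribute.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
ensemble = train_ensemble(X, y)
print(predict(ensemble, [0, 0], [0, 1]))  # -> 0
print(predict(ensemble, [1, 1], [0, 1]))  # -> 1
```

Because each component sees a different biased labeling of the same data, the components disagree in a structured way — the diversity mechanism the abstract attributes to the method — while the marginalization step keeps every component a valid predictor of the original class.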