Stacking is a general ensemble method in which a number of base classifiers are combined by a single meta-classifier that learns from their outputs. This approach offers several advantages: simplicity; performance comparable to that of the best base classifier; and the ability to combine classifiers induced by different inducers. Its main disadvantage is that, on multiclass problems, stacking tends to perform worse than other meta-learning approaches. In this paper we present Troika, a new stacking method for improving ensemble classifiers. The new scheme is built from three layers of combining classifiers. The method was tested on various datasets, and the results indicate that it is superior to the legacy ensemble schemes Stacking and StackingC, especially when the classification task involves more than two classes.
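The baseline that Troika improves on is standard two-level stacking: base classifiers are fitted to the data, and a meta-classifier is trained on their (cross-validated) predictions. A minimal sketch of that baseline using scikit-learn's `StackingClassifier` (a toy setup with assumed synthetic data; Troika's three-layer scheme itself is not shown here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class problem, standing in for the multiclass datasets
# discussed in the paper (an illustrative assumption, not their data).
X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base level: classifiers induced by different inducers.
base_classifiers = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
]

# Meta level: a single classifier learns from the base-level outputs
# (scikit-learn uses out-of-fold predictions to train it).
stack = StackingClassifier(estimators=base_classifiers,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

This two-level layout is where, per the paper, multiclass performance degrades; Troika inserts additional combining layers between the base and final levels.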