Many data analysis problems involve investigating relationships between attributes in heterogeneous databases, where different prediction models may be appropriate for different regions of the data. We propose a local rotation-based ensemble of weak classifiers. To build the rotation forests, we first identify local regions with similar characteristics and then train a local classification expert on each region to capture the relationship between the data characteristics and the target class. In a comparison with other well-known combining methods that use weak classifiers as base learners, carried out on standard benchmark datasets, our method achieved better accuracy.
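The approach described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the choice of k-means for finding local regions, QR-based random rotations, decision stumps as the weak learners, and all parameter names are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, rng=None):
    """Plain Lloyd's k-means; returns (centers, labels). Assumed region finder."""
    rng = np.random.default_rng(0) if rng is None else rng
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def fit_stump(X, y):
    """Exhaustively fit a one-level decision stump for binary labels {0, 1}."""
    best, best_acc = (0, 0.0, 1), -1.0
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                acc = (np.where(sign * (X[:, f] - thr) > 0, 1, 0) == y).mean()
                if acc > best_acc:
                    best, best_acc = (f, thr, sign), acc
    return best

def stump_predict(stump, X):
    f, thr, sign = stump
    return np.where(sign * (X[:, f] - thr) > 0, 1, 0)

class LocalRotationEnsemble:
    """Cluster the data into local regions, then train an ensemble of weak
    learners on randomly rotated copies of each region's data; predictions
    use majority vote among the query point's regional experts."""
    def __init__(self, n_regions=2, n_rotations=9, seed=0):
        self.n_regions, self.n_rotations = n_regions, n_rotations
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.centers_, labels = kmeans(X, self.n_regions, rng=self.rng)
        d = X.shape[1]
        self.experts_ = []
        for r in range(self.n_regions):
            Xr, yr = X[labels == r], y[labels == r]
            experts = []
            for _ in range(self.n_rotations):
                # Random rotation: orthogonal factor of a Gaussian matrix.
                Q, _ = np.linalg.qr(self.rng.normal(size=(d, d)))
                experts.append((Q, fit_stump(Xr @ Q, yr)))
            self.experts_.append(experts)
        return self

    def predict(self, X):
        regions = np.linalg.norm(
            X[:, None] - self.centers_[None], axis=2).argmin(axis=1)
        out = np.zeros(len(X), dtype=int)
        for i, r in enumerate(regions):
            votes = [stump_predict(s, X[i][None] @ Q)[0]
                     for Q, s in self.experts_[r]]
            out[i] = int(round(float(np.mean(votes))))
        return out
```

A point worth noting in this sketch: because each region gets its own rotated ensemble, the stumps only need to model the locally relevant decision boundary, which is the intuition behind fitting "local classification experts" rather than one global model.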