During the past decade, multiple classifier systems have been developed as a practical and effective solution for a variety of challenging applications, and a wide range of techniques for combining classifiers has been proposed in the literature. In our work we present a new approach to multiple classifier systems that uses rough sets to construct classifier ensembles; rough set methods provide various useful techniques for data classification. We also present a method for reducing the data set with the use of multiple classifiers. The reduction is performed on attributes and decreases the number of conditional attributes in the decision table with only a small loss of classification accuracy.
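As a rough illustration of the kind of attribute reduction described above, the sketch below computes an approximate reduct of a toy decision table using the standard rough-set notions of indiscernibility classes and the positive region. The function and variable names (`partition`, `dependency`, `greedy_reduct`) are illustrative assumptions, not the authors' implementation, and the greedy strategy is only one of several ways to search for reducts.

```python
def partition(rows, attrs):
    """Group row indices into indiscernibility classes over `attrs`."""
    classes = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, []).append(i)
    return list(classes.values())

def dependency(rows, decisions, attrs):
    """Size of the positive region (rows whose indiscernibility class
    is decision-consistent) divided by the total number of rows."""
    if not attrs:
        return 0.0
    consistent = 0
    for block in partition(rows, attrs):
        if len({decisions[i] for i in block}) == 1:
            consistent += len(block)
    return consistent / len(rows)

def greedy_reduct(rows, decisions, attrs):
    """Greedily add the attribute that raises the dependency degree most,
    until the dependency of the full attribute set is matched.  The result
    approximates a reduct of the decision table."""
    full = dependency(rows, decisions, attrs)
    reduct, remaining = [], list(attrs)
    while dependency(rows, decisions, reduct) < full and remaining:
        best = max(remaining,
                   key=lambda a: dependency(rows, decisions, reduct + [a]))
        reduct.append(best)
        remaining.remove(best)
    return reduct

# Toy decision table: three conditional attributes, decision kept separately.
rows = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (0, 0, 0), (1, 1, 1)]
decisions = [0, 0, 1, 1, 0, 1]
print(greedy_reduct(rows, decisions, [0, 1, 2]))  # attribute 0 alone suffices here
```

In this toy table the decision is fully determined by the first conditional attribute, so the greedy search stops after one step; on real data the reduced attribute set is typically larger, and, as the abstract notes, some classification accuracy may be traded for the smaller table.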