CIARP'11 Proceedings of the 16th Iberoamerican Congress conference on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
The Optimum-Path Forest (OPF) classifier is a recent and promising method for pattern recognition, with a fast training algorithm and good accuracy results. The investigation of a combining method for this kind of classifier is therefore important for many applications. In this paper we report a fast method to combine OPF-based classifiers trained on disjoint training subsets. Given a fixed number of subsets, the algorithm draws random samples, without replacement, from the original training set. The accuracy of each subset classifier is improved by a learning procedure, and the final decision is given by majority vote. Experiments with simulated and real data sets showed that, under certain conditions, the proposed combining method is more efficient and effective than the naive approach. They also showed that the OPF training step runs faster on a series of small subsets than on the whole training set. The combining scheme was also designed to support parallel or distributed processing, speeding up the procedure even further.
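The core scheme described above (partition the training set into disjoint random subsets, train one classifier per subset, and combine predictions by majority vote) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a 1-nearest-neighbor classifier stands in for OPF, and all names (`partition_disjoint`, `NearestNeighbor`, `majority_vote`) are hypothetical helpers introduced here for clarity.

```python
import numpy as np

def partition_disjoint(X, y, n_subsets, rng):
    # Shuffle indices once and split them into disjoint chunks,
    # i.e. sampling without replacement across subsets.
    idx = rng.permutation(len(X))
    return [(X[c], y[c]) for c in np.array_split(idx, n_subsets)]

class NearestNeighbor:
    """Stand-in for the OPF classifier (hypothetical; 1-NN for brevity)."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        # Squared Euclidean distances to all training points; pick nearest label.
        d = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        return self.y[d.argmin(axis=1)]

def majority_vote(predictions):
    # predictions: list of (n_samples,) integer-label arrays, one per classifier.
    # For each test sample, return the most frequent predicted label.
    stacked = np.stack(predictions)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)

# Usage: two well-separated Gaussian clusters, three disjoint subsets.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2)) + np.repeat([[0, 0], [4, 4]], 60, axis=0)
y = np.repeat([0, 1], 60)

subsets = partition_disjoint(X, y, 3, rng)
models = [NearestNeighbor().fit(Xi, yi) for Xi, yi in subsets]

X_test = np.array([[0.0, 0.0], [4.0, 4.0]])
print(majority_vote([m.predict(X_test) for m in models]))  # [0 1]
```

Because each classifier is trained independently on its own subset, the `fit` calls in the list comprehension could be dispatched to separate processes or machines, which is the parallel/distributed speed-up the abstract mentions.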