The goal of one-class classification is to distinguish the target class from all other classes using only training data from the target class. Because it is difficult for a single one-class classifier to capture all the characteristics of the target class, combining several one-class classifiers may be required. Previous research has shown that the Random Subspace Method (RSM), in which classifiers are trained on different subsets of the feature space, can be effective for one-class classifiers. In this paper we show that the performance of the RSM can be noisy, and that pruning inaccurate classifiers from the ensemble can be more effective than using all available classifiers. We propose to apply pruning to RSM ensembles of one-class classifiers using either a supervised criterion, the area under the ROC curve (AUC), or an unsupervised consistency criterion. It appears that with the AUC criterion performance may increase dramatically, while with the consistency criterion results do not improve but only become more predictable.
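The pruned-RSM idea described above can be sketched in a few lines: train each ensemble member on a random feature subset using only target-class data, score every member on a small labeled validation set with AUC, and keep only the best members. This is a minimal illustration, not the paper's implementation; the choice of `OneClassSVM` as the base classifier, the synthetic data, and all parameter values (`n_members`, `subspace_size`, the pruned size of 10) are assumptions made here for demonstration.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data (assumption): Gaussian target class, uniform outliers.
n_features = 20
target = rng.normal(0.0, 1.0, size=(300, n_features))
outlier = rng.uniform(-4.0, 4.0, size=(200, n_features))

X_train = target[:200]  # one-class setting: train on target data only
X_val = np.vstack([target[200:250], outlier[:100]])   # labeled set for the
y_val = np.hstack([np.ones(50), np.zeros(100)])       # supervised AUC criterion
X_test = np.vstack([target[250:], outlier[100:]])
y_test = np.hstack([np.ones(50), np.zeros(100)])

# Random Subspace Method: each member sees a random subset of the features.
n_members, subspace_size = 30, 8
members = []
for _ in range(n_members):
    feats = rng.choice(n_features, size=subspace_size, replace=False)
    clf = OneClassSVM(nu=0.1, gamma="scale").fit(X_train[:, feats])
    auc = roc_auc_score(y_val, clf.decision_function(X_val[:, feats]))
    members.append((auc, feats, clf))

# Pruning: keep only the members with the highest validation AUC.
members.sort(key=lambda m: m[0], reverse=True)
kept = members[:10]

def ensemble_score(X, ensemble):
    """Combine members by averaging their decision scores."""
    return np.mean([c.decision_function(X[:, f]) for _, f, c in ensemble], axis=0)

auc_full = roc_auc_score(y_test, ensemble_score(X_test, members))
auc_pruned = roc_auc_score(y_test, ensemble_score(X_test, kept))
print(f"full ensemble AUC:   {auc_full:.3f}")
print(f"pruned ensemble AUC: {auc_pruned:.3f}")
```

Averaging decision scores is one of several possible fixed combining rules; the unsupervised consistency criterion would replace the `roc_auc_score` call on the validation set with a label-free measure of how consistently each member accepts the target training data.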