Rule induction with CN2: some recent improvements
EWSL-91 Proceedings of the European Working Session on Learning
Stacked generalization
Neural Networks
On-line unsupervised outlier detection using finite mixtures with discounting learning algorithms
Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining
Partially Supervised Classification of Text Documents
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Combining One-Class Classifiers
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
One-class SVMs for document classification
The Journal of Machine Learning Research
Estimating the Support of a High-Dimensional Distribution
Neural Computation
Statistical Comparisons of Classifiers over Multiple Data Sets
The Journal of Machine Learning Research
Troika - An improved stacking schema for classification tasks
Information Sciences: an International Journal
Who should I cite: learning literature search models from citation behavior
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
Metric anomaly detection via asymmetric risk minimization
SIMBAD'11 Proceedings of the First international conference on Similarity-based pattern recognition
Weighted bagging for graph based one-class classifiers
MCS'10 Proceedings of the 9th international conference on Multiple Classifier Systems
ACTIDS: an active strategy for detecting and localizing network attacks
Proceedings of the 2013 ACM workshop on Artificial intelligence and security
Selecting the best classifier among the available ones is a difficult task, especially when only instances of one class exist. In this work we examine combining one-class classifiers as an alternative to selecting the single best classifier. In particular, we propose two one-class classification performance measures for weighting classifiers and show that a simple ensemble that implements these measures can outperform the most popular one-class ensembles. Furthermore, we propose a new one-class ensemble scheme, TUPSO, which uses meta-learning to combine one-class classifiers. Our experiments demonstrate the superiority of TUPSO over all other tested ensembles and show that TUPSO's performance is statistically indistinguishable from that of the hypothetical best classifier.
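To illustrate the weighted-combination idea behind the simple ensemble described above, here is a minimal sketch in Python. The two base scorers (centroid distance and nearest-neighbor distance) and the fixed weights are illustrative assumptions, not the paper's actual performance measures or the TUPSO meta-learner; in the paper, the weights would be derived from one-class performance estimates computed on the training data.

```python
# Hedged sketch of a weighted ensemble of one-class scorers.
# Higher score = more anomalous. Scorers and weights are illustrative.

def centroid_scorer(train):
    # Scores a point by its Euclidean distance to the training centroid.
    n = len(train)
    c = [sum(x[i] for x in train) / n for i in range(len(train[0]))]
    def score(x):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
    return score

def knn_scorer(train, k=1):
    # Scores a point by its distance to the k-th nearest training point.
    def score(x):
        d = sorted(sum((xi - ti) ** 2 for xi, ti in zip(x, t)) ** 0.5
                   for t in train)
        return d[min(k, len(d)) - 1]
    return score

def weighted_ensemble(scorers, weights):
    # Combines base one-class scores by a weighted average; in practice
    # the weights would reflect each classifier's estimated quality.
    total = sum(weights)
    def score(x):
        return sum(w * s(x) for w, s in zip(weights, scorers)) / total
    return score

# Train on instances of the single (normal) class only.
train = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
ens = weighted_ensemble([centroid_scorer(train), knn_scorer(train)],
                        weights=[0.6, 0.4])
```

A point near the training data should receive a lower anomaly score than a distant one, e.g. `ens((0.05, 0.05)) < ens((3.0, 3.0))`.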