Pattern recognition techniques are often used in environments (called adversarial environments) where adversaries can consciously act to limit or prevent accurate recognition performance. This can be achieved, for example, by maliciously changing the labels of training data. While Multiple Classifier Systems (MCS) are currently used in several security applications, such as intrusion detection in computer networks and spam filtering, very few MCS proposals explicitly address the problem of learning in adversarial environments. In this paper we propose a general algorithm based on a multiple classifier approach to identify and clean mislabeled training samples. We report several experiments that verify the robustness of the proposed approach to the presence of possible mislabeled samples. In particular, we show that the performance obtained with a simple classifier trained on the training set “cleaned” by our algorithm is comparable to, and even better than, that obtained by some state-of-the-art MCS trained on the original datasets.
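The abstract does not spell out the cleaning algorithm, but the general idea of using a multiple classifier approach to flag mislabeled training samples can be sketched as a majority-vote filter: train several diverse classifiers on held-out folds and reject any sample whose given label is contradicted by most of them. The classifier choices, fold count, and voting threshold below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of an ensemble-based label-noise filter
# (majority-vote filtering; specific classifiers and threshold
# are assumptions for illustration only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def filter_mislabeled(X, y, n_splits=5, seed=0):
    """Flag samples whose given label is rejected by a majority
    of classifiers trained on the remaining folds."""
    classifiers = [DecisionTreeClassifier(random_state=seed),
                   GaussianNB(),
                   KNeighborsClassifier(n_neighbors=3)]
    votes_against = np.zeros(len(y), dtype=int)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        for clf in classifiers:
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            # Count each classifier that disagrees with the label.
            votes_against[test_idx] += (pred != y[test_idx])
    # Majority vote: more than half the classifiers reject the label.
    return votes_against > len(classifiers) // 2

# Usage: inject 10% label noise into a synthetic binary task,
# then drop the flagged samples before training a final classifier.
X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)
noisy_idx = rng.choice(len(y), size=50, replace=False)
y_noisy = y.copy()
y_noisy[noisy_idx] = 1 - y_noisy[noisy_idx]   # flip labels
flagged = filter_mislabeled(X, y_noisy)
X_clean, y_clean = X[~flagged], y_noisy[~flagged]
```

A simple classifier trained on `(X_clean, y_clean)` would then play the role of the "simple classifier trained on the cleaned training set" evaluated in the paper's experiments.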