Learning for security applications is an emerging field in which adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional learning approaches assume benign errors in the data and may therefore be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
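To make the distinction between benign and adversarial errors concrete, the following is a minimal sketch (not the paper's method) contrasting a classifier trained on clean data with one trained under adversarially flipped labels. All names, the perceptron learner, the synthetic Gaussian data, and the contamination strategy (flipping labels of points nearest the true boundary) are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two well-separated Gaussian clusters with labels in {-1, +1}.
    X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
                   rng.normal(+1.5, 1.0, (n // 2, 2))])
    y = np.array([-1] * (n // 2) + [+1] * (n // 2))
    return X, y

def train_perceptron(X, y, epochs=20):
    # Classic perceptron with an appended bias feature.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:  # misclassified: update
                w += yi * xi
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean(np.sign(Xb @ w) == y)

X_tr, y_tr = make_data(200)
X_te, y_te = make_data(200)

# Benign setting: train on the data as given.
w_clean = train_perceptron(X_tr, y_tr)

# Adversarial contamination: an adversary flips the labels of a
# fraction eta of training points, targeting those closest to the
# true separator (where flips are hardest to detect).
eta = 0.2
k = int(eta * len(y_tr))
margin = np.abs(X_tr @ np.array([1.0, 1.0]))  # proxy distance to true boundary
flip = np.argsort(margin)[:k]
y_bad = y_tr.copy()
y_bad[flip] *= -1
w_dirty = train_perceptron(X_tr, y_bad)

print(f"clean-data accuracy:    {accuracy(w_clean, X_te, y_te):.2f}")
print(f"contaminated accuracy:  {accuracy(w_dirty, X_te, y_te):.2f}")
```

The point of the sketch is that the contamination is not random noise: the adversary chooses *which* labels to corrupt, which is exactly the failure mode that benign-error assumptions do not cover.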