Traditional classification methods assume that the training and test data arise from the same underlying distribution. However, in some adversarial settings the test set is deliberately constructed to increase the error rate of a classifier. A prominent example is email spam, where words are transformed to evade word-based features embedded in a spam filter. Recent research has modeled the interaction between a data miner and an adversary as a sequential Stackelberg game, and computed its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. In this paper, however, we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms the singular vectors of a training data matrix to simulate manipulations by an adversary; from that perspective, a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that, compared with the iterative algorithm used in recent literature, our one-step game significantly reduces computing time while still producing good Nash equilibrium results.
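The core idea of simulating the adversary via the spectrum of the training data can be sketched as follows. This is a minimal illustration, not the paper's exact optimization: we take the SVD of a training matrix and attenuate its leading singular directions, mimicking an adversary that suppresses the dominant features a classifier relies on. The shrink factor `alpha` and rank `k` are illustrative parameters introduced here, not quantities from the paper.

```python
import numpy as np

def simulate_adversary(X, alpha=0.5, k=1):
    """Return a manipulated copy of X whose top-k singular values
    are scaled by alpha (0 < alpha < 1 dampens dominant directions)."""
    # Thin SVD: X = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_adv = s.copy()
    s_adv[:k] *= alpha  # attenuate the k dominant singular directions
    return U @ np.diag(s_adv) @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))      # toy training data matrix
X_adv = simulate_adversary(X, alpha=0.5, k=1)

# The manipulated matrix has the same shape as X, but its leading
# singular value has been halved (it may no longer be the largest
# entry of the new spectrum after re-sorting).
s_orig = np.linalg.svd(X, compute_uv=False)
s_new = np.linalg.svd(X_adv, compute_uv=False)
```

A classifier trained on `X_adv` (rather than `X`) is then fit against the simulated worst-case data, which is the intuition behind replacing the iterative Stackelberg play with a single optimization.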