The Neyman-Pearson (NP) approach to hypothesis testing is useful in situations where different types of error have different consequences or a priori probabilities are unknown. For any α ∈ (0, 1), the NP lemma specifies the most powerful test of size α, but assumes the distributions for each hypothesis are known or (in some cases) the likelihood ratio is monotonic in an unknown parameter. This paper investigates an extension of NP theory to situations in which one has no knowledge of the underlying distributions except for a collection of independent and identically distributed (i.i.d.) training examples from each hypothesis. Building on a "fundamental lemma" of Cannon et al., we demonstrate that several concepts from statistical learning theory have counterparts in the NP context. Specifically, we consider constrained versions of empirical risk minimization (NP-ERM) and structural risk minimization (NP-SRM), and prove performance guarantees for both. General conditions are given under which NP-SRM leads to strong universal consistency. We also apply NP-SRM to (dyadic) decision trees to derive rates of convergence. Finally, we present explicit algorithms to implement NP-SRM for histograms and dyadic decision trees.
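To make the constrained formulation concrete, the following is a minimal sketch of the NP-ERM program; the notation (n₀, n₁, R̂₀, R̂₁, ε₀) is introduced here for illustration only, and the exact tolerance used in the paper is the one tied to its performance guarantees. Given n₀ i.i.d. training examples from hypothesis 0 and n₁ from hypothesis 1, the empirical false-alarm and miss probabilities of a classifier f are

\[
\hat{R}_0(f) = \frac{1}{n_0} \sum_{i \,:\, y_i = 0} \mathbf{1}\{ f(x_i) = 1 \},
\qquad
\hat{R}_1(f) = \frac{1}{n_1} \sum_{i \,:\, y_i = 1} \mathbf{1}\{ f(x_i) = 0 \},
\]

and NP-ERM selects, within a fixed class \(\mathcal{F}\),

\[
\hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{F}} \; \hat{R}_1(f)
\quad \text{subject to} \quad \hat{R}_0(f) \le \alpha + \varepsilon_0 ,
\]

i.e., it minimizes the empirical miss probability subject to a slightly relaxed empirical size constraint. NP-SRM extends this by additionally trading the empirical miss probability off against the complexity of \(\mathcal{F}\) across a nested hierarchy of classes.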