Estimation of Dependences Based on Empirical Data (Springer Series in Statistics)
Decision-making processes in pattern recognition (ACM monograph series)
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
A theory for memory-based learning
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Learning stochastic functions by smooth simultaneous estimation
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
IEEE Transactions on Pattern Analysis and Machine Intelligence
Pattern Recognition and Valiant's Learning Framework
IEEE Transactions on Pattern Analysis and Machine Intelligence
Statistical Pattern Recognition: A Review
IEEE Transactions on Pattern Analysis and Machine Intelligence
On an Asymptotically Optimal Adaptive Classifier Design Criterion
IEEE Transactions on Pattern Analysis and Machine Intelligence
Best-Case Results for Nearest-Neighbor Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence
Data Complexity Analysis for Classifier Combination
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
Machine learning with data dependent hypothesis classes
The Journal of Machine Learning Research
Binary Partitioning, Perceptual Grouping, and Restoration with Semidefinite Programming
IEEE Transactions on Pattern Analysis and Machine Intelligence
Neural Computation
Data Complexity Analysis: Linkage between Context and Solution in Classification
SSPR & SPR '08 Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
The problem of induction and machine learning
IJCAI'91 Proceedings of the 12th international joint conference on Artificial intelligence - Volume 2
A test sequence is used to select the best rule from a class of discrimination rules defined in terms of the training sequence. The Vapnik-Chervonenkis and related inequalities are used to obtain distribution-free bounds on the difference between the probability of error of the selected rule and the probability of error of the best rule in the given class. The bounds are used to prove the consistency and asymptotic optimality for several popular classes, including linear discriminators, nearest-neighbor rules, kernel-based rules, histogram rules, binary tree classifiers, and Fourier series classifiers. In particular, the method can be used to choose the smoothing parameter in kernel-based rules, to choose k in the k-nearest neighbor rule, and to choose between parametric and nonparametric rules.