The use of boxes for pattern classification is widespread and is a fairly natural way to partition data into different classes or categories. In this paper we consider multi-category classifiers based on unions of boxes. The classification method studied may be described as follows: find boxes such that all points in the region enclosed by each box are assumed to belong to the same category, and then classify the remaining points by their distances to these boxes, assigning to each point the category of the nearest box. This extends the simple method of classifying by unions of boxes by incorporating a natural, proximity-based way of classifying points that lie outside the boxes. We analyze the generalization accuracy of such classifiers and obtain generalization error bounds that depend on a measure of how definitively the training points are classified.
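The nearest-box classification rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes axis-parallel boxes given as (lower corner, upper corner, category) triples and uses the Euclidean distance from a point to the nearest point of each box, which is zero for points inside the box.

```python
import math

def box_distance(point, lower, upper):
    # Euclidean distance from `point` to the axis-parallel box [lower, upper];
    # per coordinate, the gap is how far the point lies below `lower` or
    # above `upper` (zero if it lies between them).
    return math.sqrt(sum(
        max(l - x, 0.0, x - u) ** 2
        for x, l, u in zip(point, lower, upper)
    ))

def classify(point, boxes):
    # boxes: list of (lower, upper, category) triples (hypothetical format).
    # Points inside a box are at distance zero from it, so they receive that
    # box's category; all other points get the category of the nearest box.
    nearest = min(boxes, key=lambda b: box_distance(point, b[0], b[1]))
    return nearest[2]
```

For example, with two boxes `[0,1]^2` labeled "A" and `[3,4] x [0,1]` labeled "B", a point at `(2.9, 0.5)` lies outside both boxes but is closer to the second, so it is assigned category "B".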