This paper concerns classification by Boolean functions. We investigate the classification accuracy obtained by standard classification techniques on unseen points (elements of the domain {0,1}^n, for some n) that are similar, in senses made precise, to the points observed as training observations. Explicitly, we use a new measure of how similar a point x ∈ {0,1}^n is to a set of such points in order to restrict the domain of points on which we offer a classification: for points sufficiently dissimilar, no classification is given. We report experimental results indicating that the classification accuracies obtained on the resulting restricted domains are better than those obtained without restriction. These experiments involve a number of standard datasets and classification techniques. We also compare the resulting classification accuracies with those obtained when the domain is restricted using the Hamming distance.
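To make the restriction procedure concrete, the following is a minimal sketch of domain restriction with rejection. Since the paper's own similarity measure is not reproduced in the abstract, the sketch uses the Hamming-distance baseline the abstract mentions as a comparison; the function names, the threshold parameter, and the toy base classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def min_hamming_distance(x, X_train):
    """Minimum Hamming distance from a Boolean point x to the rows of X_train."""
    # Each row of X_train is a point in {0,1}^n; counting differing bits
    # per row and taking the minimum gives the distance to the nearest
    # training point.
    return int(np.min(np.sum(X_train != x, axis=1)))

def classify_with_rejection(x, X_train, base_classifier, threshold):
    """Classify x only if it is sufficiently similar (here: Hamming-close)
    to the training set; otherwise return None, i.e. no classification."""
    if min_hamming_distance(x, X_train) > threshold:
        return None  # point is too dissimilar: refuse to classify
    return base_classifier(x)

# Toy usage: a stand-in classifier over {0,1}^3 (majority of bits).
X_train = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
clf = lambda x: int(x.sum() >= 2)
print(classify_with_rejection(np.array([0, 1, 1]), X_train, clf, threshold=1))  # 1
print(classify_with_rejection(np.array([1, 0, 0]), X_train, clf, threshold=0))  # None
```

Under this scheme, accuracy is measured only over the points that receive a classification, which is the restricted-domain accuracy the experiments compare against unrestricted classification.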