Voting Nearest-Neighbor Subclassifiers
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
An important issue in nearest-neighbor classification is how to reduce the size of large example sets. Whereas many researchers recommend replacing the original set with a single carefully selected subset, we investigate a mechanism that creates three or more such subsets, each chosen so that, when used as a 1-NN subclassifier, it tends to err in a different part of the instance space. Failures of individual subclassifiers can then be corrected by voting. The cost of our example-selection procedure is linear in the size of the original training set and, as our experiments demonstrate, dramatic data reduction can be achieved without a major drop in classification accuracy.
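The following is a minimal sketch of the voting scheme the abstract describes: several small subsets of the training data each act as a 1-NN subclassifier, and a query is labeled by majority vote over their predictions. The abstract does not specify the example-selection procedure itself, so `build_subsets` below uses a random disjoint partition purely as a stand-in (it is linear in the training-set size, but it is not the paper's selection method); the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def one_nn_predict(subset_X, subset_y, x):
    """Label x with the class of its single nearest neighbor
    (Euclidean distance) within one subset."""
    dists = np.linalg.norm(subset_X - x, axis=1)
    return subset_y[np.argmin(dists)]

def build_subsets(X, y, n_subsets=3, seed=None):
    """Partition the training set into n_subsets disjoint subsets.
    NOTE: a random partition is only a placeholder for the paper's
    linear-time selection procedure, which the abstract does not detail."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), n_subsets)
    return [(X[p], y[p]) for p in parts]

def vote_predict(subsets, x):
    """Majority vote over the 1-NN subclassifiers; ties are broken
    in favor of the first class returned by np.unique."""
    votes = [one_nn_predict(sx, sy, x) for sx, sy in subsets]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: 300 points in 2-D, linearly separable labels.
X = np.random.default_rng(0).normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
subsets = build_subsets(X, y, n_subsets=3, seed=0)
print(vote_predict(subsets, np.array([0.5, -0.1])))
```

Using an odd number of subsets (three or more, as in the abstract) avoids most voting ties, and because each subclassifier searches only its own small subset, total query cost stays well below that of 1-NN over the full training set.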