Computational geometry: an introduction
Instance-Based Learning Algorithms. Machine Learning
Bumptrees for efficient function, constraint, and classification learning. NIPS-3: Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems 3
An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM (JACM)
An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Transactions on Mathematical Software (TOMS)
M-tree: An Efficient Access Method for Similarity Search in Metric Spaces. VLDB '97: Proceedings of the 23rd International Conference on Very Large Data Bases
VLDB '98: Proceedings of the 24th International Conference on Very Large Data Bases
Similarity Search in High Dimensions via Hashing. VLDB '99: Proceedings of the 25th International Conference on Very Large Data Bases
The Anchors Hierarchy: Using the Triangle Inequality to Survive High Dimensional Data. UAI '00: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
A study of instance-based algorithms for supervised learning tasks: mathematical, empirical, and psychological evaluations
Supervised classification for video shot segmentation. ICME '03: Proceedings of the 2003 International Conference on Multimedia and Expo - Volume 1
New Algorithms for Efficient High-Dimensional Nonparametric Classification. The Journal of Machine Learning Research
BoostMap: An Embedding Method for Efficient Nearest Neighbor Retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence
This paper is about a variant of k-nearest-neighbor classification on large, many-class, high-dimensional datasets. k-nearest-neighbor remains a popular classification technique, especially in areas such as computer vision, drug activity prediction, and astrophysics. Furthermore, many more modern classifiers, such as kernel-based Bayes classifiers or the prediction phase of SVMs, have computational requirements similar to those of k-NN. We believe that tractable k-NN algorithms therefore continue to be important.

This paper relies on the insight that, even with many classes, finding the majority class among the k nearest neighbors of a query need not require explicitly finding those k nearest neighbors. This insight was previously used in (Liu et al., 2003) in two algorithms, KNS2 and KNS3, which dealt with fast classification in the two-class case. In this paper we show how a different approach, IOC (standing for the International Olympic Committee), applies to the case of n classes, where n ≥ 2.

IOC assumes a slightly different processing of the datapoints in the neighborhood of the query. This allows it to search a set of metric trees, one for each class. During the searches it is possible to quickly prune away classes that cannot possibly be the majority; a toy sketch of the per-class-tree idea appears below.

We give experimental results on datasets of up to 5.8 × 10^5 records and 1.5 × 10^3 attributes, frequently showing an order-of-magnitude acceleration compared with each of (i) conventional linear scan, (ii) a well-known independent SR-tree implementation of conventional k-NN, and (iii) a highly optimized conventional k-NN metric tree search.
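To make the per-class-tree idea concrete, here is a minimal sketch in Python. It is not the paper's IOC algorithm (whose pruning operates inside the tree searches and eliminates whole classes from contention); it only illustrates the underlying observation that the global k nearest neighbors of a query are always contained in the union of each class's own k nearest neighbors, so one metric tree per class can be queried independently, and a class whose closest point already lies beyond the current k-th-best distance cannot contribute to the answer. The helper names `build_class_trees` and `majority_class_knn` are ours, and scikit-learn's `BallTree` stands in for the paper's metric trees.

```python
# Toy illustration only: per-class metric trees with a simple
# class-skipping rule. Not the IOC algorithm from the paper.
import numpy as np
from sklearn.neighbors import BallTree

def build_class_trees(X, y):
    """Build one BallTree per class label (hypothetical helper)."""
    trees = {}
    for c in np.unique(y):
        pts = X[y == c]
        trees[c] = (BallTree(pts), len(pts))
    return trees

def majority_class_knn(class_trees, q, k):
    """Majority class among the k nearest neighbors of q."""
    q = np.asarray(q, dtype=float).reshape(1, -1)
    # Distance from q to each class's single nearest point;
    # visit the most promising classes first.
    nearest = {c: t.query(q, k=1)[0][0, 0] for c, (t, _) in class_trees.items()}
    candidates, kth_best = [], np.inf
    for c in sorted(class_trees, key=nearest.get):
        if nearest[c] > kth_best:
            # Every remaining class is even farther away, so none of
            # them can place a point among the k nearest. Stop.
            break
        tree, n = class_trees[c]
        dists = tree.query(q, k=min(k, n))[0][0]
        # Merge this class's k-NN into the running global k-NN.
        candidates = sorted(candidates + [(d, c) for d in dists])[:k]
        if len(candidates) == k:
            kth_best = candidates[-1][0]
    votes = [c for _, c in candidates]
    return max(set(votes), key=votes.count)

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 10, size=2000)
trees = build_class_trees(X, y)
print(majority_class_knn(trees, X[0], k=9))
```

Note that this sketch still computes each surviving class's k nearest neighbors exactly; IOC's advantage comes from terminating those searches early, once a class provably cannot be the majority.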