Previous experiments with low-dimensional data sets have shown that Gabriel-graph methods for instance-based learning are among the best machine learning algorithms for pattern classification applications. However, as the dimensionality of the data grows, all points in the training set tend to become Gabriel neighbors of one another, calling the efficacy of this method into question. Indeed, it has been conjectured that for high-dimensional data, proximity-graph methods would have to employ sparser graphs, such as relative neighborhood graphs (RNGs) and minimum spanning trees (MSTs), in order to maintain their privileged status. Here the performance of proximity-graph methods for instance-based learning that employ Gabriel graphs, relative neighborhood graphs, and minimum spanning trees is compared experimentally on high-dimensional data sets. These methods are also compared empirically against the traditional k-NN rule and support vector machines (SVMs), the leading competitors of proximity-graph methods.
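To make the Gabriel-graph condition concrete, the following is a minimal brute-force sketch (not the paper's implementation; function names are illustrative). Two points p and q are Gabriel neighbors iff no third point r lies strictly inside the ball whose diameter is the segment pq, which is equivalent to d(p,r)² + d(q,r)² ≥ d(p,q)² for every other r. Prototype selection then keeps only points incident to a Gabriel edge whose endpoints carry different class labels, since those are the points that shape the decision boundary:

```python
from itertools import combinations

def sq_dist(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def gabriel_neighbors(points):
    """Return the Gabriel-graph edge set over `points`.

    p and q are Gabriel neighbors iff no other point r falls strictly
    inside the ball with diameter pq, i.e.
    d(p,r)^2 + d(q,r)^2 >= d(p,q)^2 for all r != p, q.
    Brute-force O(n^3) check; illustrative only.
    """
    n = len(points)
    edges = []
    for i, j in combinations(range(n), 2):
        d_ij = sq_dist(points[i], points[j])
        if all(sq_dist(points[i], points[k]) + sq_dist(points[j], points[k]) >= d_ij
               for k in range(n) if k not in (i, j)):
            edges.append((i, j))
    return edges

def select_prototypes(points, labels):
    """Keep only points with at least one Gabriel neighbor of a
    different class; interior (same-class-surrounded) points are
    discarded, which is the basic Gabriel-graph editing idea."""
    keep = set()
    for i, j in gabriel_neighbors(points):
        if labels[i] != labels[j]:
            keep.update((i, j))
    return sorted(keep)

# Four collinear points, two per class: only the two boundary-adjacent
# points (indices 1 and 2) survive editing.
pts = [(0, 0), (1, 0), (3, 0), (4, 0)]
labs = [0, 0, 1, 1]
print(gabriel_neighbors(pts))   # → [(0, 1), (1, 2), (2, 3)]
print(select_prototypes(pts, labs))  # → [1, 2]
```

The high-dimensional pathology discussed in the abstract shows up directly in this test: as dimensionality grows, the inequality d(p,r)² + d(q,r)² ≥ d(p,q)² holds for almost every r, so nearly all pairs become Gabriel neighbors and the editing step discards almost nothing.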