This article discusses the role and significance of nearest-neighbor rule (NNR) approaches (and their conceptual equivalents in artificial intelligence, such as instance-based learning, lazy learning, memory-based reasoning, and case-based reasoning) in the data mining and knowledge discovery process. The presentation first traces the development of NNR approaches from their origins in the early 1950s to the present day, with appropriate historical references. In data mining applications, which necessarily involve large databases, computational cost becomes a major issue, and NNR techniques are particularly vulnerable in this respect. Accordingly, this aspect of NNR techniques is discussed next in detail to provide a panoramic view of the latest developments in the area. The associated issues of attribute selection and weighting are also addressed. This is followed by an overview of the distance metrics that have been proposed in the literature to meet the special needs of the data mining community, in contrast to the traditional Euclidean metric and its variants, such as the Manhattan (city-block) distance, generally employed in the pattern recognition field. A brief but direct discussion of the well-recognized curse of dimensionality is offered next, although this subject matter is indirectly covered in prior subsections. The article concludes with a brief summation of the objective and scope of the presentation, highlighting some of the outstanding issues in this arena.
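To make the abstract's core idea concrete, the following is a minimal sketch (not from the article itself) of k-nearest-neighbor classification parameterized by the Minkowski distance, which subsumes both the Euclidean (p=2) and Manhattan/city-block (p=1) metrics mentioned above. The function names, the toy data set, and the choice of k are illustrative assumptions.

```python
from collections import Counter

def minkowski(a, b, p):
    # Minkowski distance: p=2 gives Euclidean, p=1 gives Manhattan (city-block).
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_classify(query, examples, k=3, p=2):
    # Classify `query` by majority vote among its k nearest stored examples.
    # `examples` is a list of (feature_vector, class_label) pairs.
    neighbours = sorted(examples, key=lambda ex: minkowski(query, ex[0], p))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical toy training set: (feature vector, class label)
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]

print(knn_classify((0.2, 0.1), train, k=3, p=2))  # Euclidean metric
print(knn_classify((0.2, 0.1), train, k=3, p=1))  # Manhattan metric
```

Note that this brute-force version scans every stored example per query; the tree-based, branch-and-bound, and approximate search methods surveyed in the article exist precisely to avoid that linear cost on large databases.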