In order to classify an unseen (query) vector q with the k-Nearest Neighbors (k-NN) method, one computes a similarity function between q and the training vectors in a database. In the basic variant of the k-NN algorithm, the predicted class of q is the majority class among q's k nearest neighbors. Different similarity functions may be applied, leading to different classification results. In this paper, a heterogeneous similarity function is constructed out of different one-component metrics by minimizing the number of classification errors the system makes on a training set. On five tested datasets, the HSFL-NN system introduced in this paper gives better results on unseen samples than the plain k-NN method with an optimally selected parameter k and the optimal homogeneous similarity function.
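The sketch below illustrates the two ideas the abstract relies on: majority-vote k-NN classification under a pluggable similarity/distance function, and a heterogeneous distance assembled from per-feature ("one-component") metrics. The specific component metrics, weights, and function names here are hypothetical illustrations; the paper's HSFL-NN selects its components by minimizing training-set errors, a search that is not reproduced here.

```python
import numpy as np
from collections import Counter

def knn_predict(query, X_train, y_train, k, distance):
    """Classify `query` by majority vote among its k nearest training vectors."""
    dists = np.array([distance(query, x) for x in X_train])
    nearest = np.argsort(dists)[:k]          # indices of the k smallest distances
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

def heterogeneous_distance(component_metrics, weights):
    """Combine per-feature ("one-component") metrics into one distance.

    The component metrics and weights below are assumptions for illustration;
    in the paper they would be chosen to minimize classification errors on the
    training set.
    """
    def dist(a, b):
        return sum(w * m(ai, bi)
                   for m, w, ai, bi in zip(component_metrics, weights, a, b))
    return dist

# Hypothetical example: two numeric features, one compared by absolute
# difference and one by squared difference.
metrics = [lambda ai, bi: abs(ai - bi),
           lambda ai, bi: (ai - bi) ** 2]
dist = heterogeneous_distance(metrics, weights=[1.0, 0.5])

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.2], [0.9, 1.1]])
y_train = np.array([0, 1, 0, 1])
print(knn_predict(np.array([0.2, 0.1]), X_train, y_train, k=3, distance=dist))
```

With a homogeneous similarity function, every feature would be compared by the same metric; the heterogeneous construction above simply allows each feature its own metric before the weighted combination.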