Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of testing examples to the stored training examples. Since certain attributes are more discriminative, while others may be less relevant or entirely irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper we describe our attempt at deriving theoretically optimal weights that minimize the predictive error of NN algorithms. Assuming a uniform distribution of examples in a 2-d continuous space, we first derive the average predictive error introduced by a linear classification boundary, and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline, or lower bound, for comparing other, empirical weight-setting methods.
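To make the role of attribute weights concrete, here is a minimal Python sketch of a weighted-distance NN classifier. It is only an illustration of the general setup the abstract describes, not the paper's derivation: the data, the weight vector, and the helper names (`weighted_distance`, `nn_classify`) are hypothetical.

```python
import numpy as np

def weighted_distance(x, y, w):
    """Weighted Euclidean distance; w holds one non-negative weight per attribute."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def nn_classify(query, train_X, train_y, w):
    """Label `query` with the class of its nearest training example
    under the weighted distance."""
    dists = [weighted_distance(query, x, w) for x in train_X]
    return train_y[int(np.argmin(dists))]

# Toy 2-d example: attribute 0 is discriminative, attribute 1 is mostly noise,
# so it is given a smaller weight in the distance function.
train_X = np.array([[0.1, 0.9], [0.2, 0.1], [0.8, 0.5], [0.9, 0.8]])
train_y = np.array([0, 0, 1, 1])
w = np.array([1.0, 0.1])  # illustrative weights, not the optimal values derived in the paper

print(nn_classify(np.array([0.3, 0.95]), train_X, train_y, w))  # -> 0
```

With uniform weights, the noisy second attribute can pull the query toward the wrong class; down-weighting it lets the discriminative attribute dominate the distance, which is exactly the effect that optimal weight setting aims to formalize.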