The k-Nearest Neighbour (k-NN) method is a typical lazy learning paradigm for solving classification problems. Although the method was originally proposed as a non-parametric one, attribute weighting has been widely adopted to cope with irrelevant attributes. In this paper, we propose a new attribute weight setting method for k-NN based classifiers that uses quadratic programming and is particularly suited to binary classification problems. Our method formulates attribute weight setting as a quadratic programming problem and uses commercial solver software to compute the attribute weights. To evaluate the method, we carried out a series of experiments on six established data sets. The experiments show that the method is practical across a range of problems, achieving a consistent improvement in accuracy over the standard k-NN method and overall competitive performance. A further merit of the method is that it remains effective with small training sets.
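
The abstract does not give the paper's exact formulation, so the following is only a rough illustrative sketch of how attribute weighting for binary k-NN classification can be cast as a quadratic programme: non-negative weights are sought under which each training point's nearest same-class neighbour is closer, in the weighted metric, than its nearest opposite-class neighbour, up to a slack term. The function name qp_attribute_weights, the margin/slack construction, and the use of the cvxpy front end are assumptions for illustration only, not the method reported in the paper (which uses commercial QP software).

# Illustrative sketch only: one plausible QP formulation of attribute
# weighting for binary k-NN, not the paper's actual formulation.
import numpy as np
import cvxpy as cp

def qp_attribute_weights(X, y, C=1.0):
    """Learn non-negative attribute weights for a weighted k-NN metric.

    X : (n, d) feature matrix; y : (n,) binary labels in {0, 1}.
    Returns a length-d weight vector.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    same_sq, diff_sq = [], []
    for i in range(n):
        sq = (X - X[i]) ** 2                  # per-attribute squared gaps
        mask_same = (y == y[i])
        mask_same[i] = False                  # exclude the point itself
        mask_diff = (y != y[i])
        if not mask_same.any() or not mask_diff.any():
            continue
        # Nearest same-class and opposite-class neighbours, found under
        # the unweighted metric for simplicity.
        hit = np.argmin(sq[mask_same].sum(axis=1))
        miss = np.argmin(sq[mask_diff].sum(axis=1))
        same_sq.append(sq[mask_same][hit])
        diff_sq.append(sq[mask_diff][miss])
    S, D = np.array(same_sq), np.array(diff_sq)

    w = cp.Variable(d, nonneg=True)           # attribute weights
    xi = cp.Variable(len(S), nonneg=True)     # slack variables
    # QP: keep the weights small, and require each nearest same-class
    # neighbour to be at least one unit closer (in weighted squared
    # distance) than the nearest opposite-class neighbour, up to slack.
    objective = cp.Minimize(cp.sum_squares(w) + C * cp.sum(xi))
    constraints = [S @ w + 1 <= D @ w + xi]
    cp.Problem(objective, constraints).solve()
    return w.value

Under this sketch, the learned weights would then be plugged into a weighted Euclidean distance, sqrt(sum_j w_j * (x_j - x'_j)^2), for the k-NN classifier, with irrelevant attributes ideally driven toward zero weight.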