The ‘kernel approach’ has attracted great attention with the development of the support vector machine (SVM) and has been studied in a general way. It offers an alternative way to increase the computational power of linear learning machines by mapping the data into a high-dimensional feature space. In this paper, the approach is extended to the well-known nearest-neighbor algorithm: the original distance metric is replaced by a kernel-induced distance in the Hilbert feature space, and the resulting algorithm is called the kernel nearest-neighbor algorithm. Three data sets were used for testing: an artificial data set, the BUPA liver disorders database, and the USPS database. The kernel nearest-neighbor algorithm was compared with the conventional nearest-neighbor algorithm and the SVM. Experiments show that the kernel nearest-neighbor algorithm is more powerful than the conventional nearest-neighbor algorithm and can compete with the SVM.
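The kernel substitution described above can be sketched as follows: the squared distance between two points in the feature space induced by a kernel K is K(x, x) − 2 K(x, y) + K(y, y), so the nearest-neighbor rule can use this quantity without ever computing the mapping explicitly. The polynomial kernel, degree, and function names below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def poly_kernel(x, y, degree=2):
    # Polynomial kernel K(x, y) = (x . y)^degree; the degree is a free
    # parameter chosen here purely for illustration.
    return float(np.dot(x, y)) ** degree

def kernel_distance_sq(x, y, kernel):
    # Squared distance between phi(x) and phi(y) in the kernel-induced
    # feature space: ||phi(x) - phi(y)||^2 = K(x,x) - 2 K(x,y) + K(y,y).
    return kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)

def kernel_knn_predict(X_train, y_train, x, kernel, k=1):
    # Classify x by majority vote among its k nearest training points
    # under the kernel-induced distance instead of the Euclidean one.
    dists = np.array([kernel_distance_sq(xi, x, kernel) for xi in X_train])
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: two classes along different axes, query near class 0.
X_train = np.array([[0.0, 1.0], [0.0, 2.0], [3.0, 0.0], [4.0, 0.0]])
y_train = np.array([0, 0, 1, 1])
pred = kernel_knn_predict(X_train, y_train, np.array([0.0, 1.5]),
                          poly_kernel, k=3)
```

Note that with a linear kernel this reduces exactly to the conventional (Euclidean) nearest-neighbor rule; only nonlinear kernels such as the polynomial one change the neighborhood structure.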