The k-Nearest Neighbor (k-NN) classifier has been applied to the identification of cancer samples from gene expression profiles with encouraging results. However, k-NN usually relies on Euclidean distances, which often fail to reflect the sample proximities accurately. Non-Euclidean dissimilarities focus on different features of the data and should be integrated in order to reduce misclassification errors. In this paper, we learn a linear combination of dissimilarities using a regularized kernel alignment algorithm. The weights of the combination are learned in a Hyper Reproducing Kernel Hilbert Space (HRKHS) using a Semidefinite Programming algorithm. This approach allows us to incorporate a smoothing term that penalizes the complexity of the family of distances and avoids overfitting. The experimental results suggest that the proposed method outperforms other metric learning strategies and improves on the classical k-NN algorithm based on a single dissimilarity.
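The core idea can be illustrated with a minimal sketch: each dissimilarity matrix is turned into a kernel by double centering, and the combination weights are chosen according to each kernel's alignment with the ideal kernel y yᵀ. This is a simplified alignment-weighting heuristic, not the paper's regularized SDP formulation in an HRKHS; all function names here are hypothetical.

```python
import numpy as np

def dissimilarity_to_kernel(D):
    """Convert a dissimilarity matrix into a kernel via double
    centering: K = -1/2 * H D^2 H (the classical MDS construction)."""
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * H @ (D ** 2) @ H

def alignment(K, Ky):
    """Empirical kernel alignment <K, Ky>_F / (||K||_F ||Ky||_F)."""
    return np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky))

def combine_dissimilarities(dissims, y):
    """Weight each dissimilarity-induced kernel by its (non-negative)
    alignment with the ideal kernel y y^T and normalize the weights.
    A crude stand-in for the paper's regularized SDP optimization."""
    Ky = np.outer(y, y)
    Ks = [dissimilarity_to_kernel(D) for D in dissims]
    w = np.array([max(alignment(K, Ky), 0.0) for K in Ks])
    w = w / w.sum()
    K = sum(wi * Ki for wi, Ki in zip(w, Ks))
    return w, K

if __name__ == "__main__":
    # Toy two-class data; combine a Euclidean and a correlation-based
    # dissimilarity, as one might with expression profiles.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))
    X[:10] += 2.0                       # shift one class
    y = np.where(np.arange(20) < 10, 1.0, -1.0)
    D_euc = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    D_cor = 1.0 - np.corrcoef(X)        # correlation dissimilarity
    w, K = combine_dissimilarities([D_euc, D_cor], y)
    print("weights:", w)
```

The combined kernel K can then drive a kernel k-NN (or SVM) classifier; the regularization and HRKHS machinery of the paper additionally control the complexity of the distance family, which this heuristic omits.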