Using kernels to embed non-linear data into high-dimensional spaces where linear analysis becomes possible is now standard practice. With the Gaussian kernel, however, the data lie on a hypersphere in the corresponding Reproducing Kernel Hilbert Space (RKHS). Inspired by earlier work in non-linear statistics, this article investigates dedicated tools that account for this particular geometry. Within this geometrical interpretation of kernel theory, Riemannian distances are preferred over Euclidean ones. It is shown that this amounts to considering a new kernel and its corresponding RKHS. Experiments on publicly available real datasets show the potential benefits of the method for clustering tasks, notably through a new variant of kernel k-means on the hypersphere. Classification problems are also considered in a classwise setting. In both cases, the results improve on standard techniques.
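The hypersphere observation follows from the fact that a Gaussian kernel satisfies k(x, x) = 1, so every mapped point has unit norm in the RKHS; the geodesic distance on that sphere is then the arc length arccos(k(x, y)) rather than the chordal Euclidean distance. The following sketch (an illustration of this general fact, not the authors' implementation) makes both points concrete with NumPy:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = gaussian_kernel(X, X)

# k(x, x) = 1 for the Gaussian kernel, so ||phi(x)||^2 = 1:
# all mapped points lie on the unit hypersphere of the RKHS.
assert np.allclose(np.diag(K), 1.0)

# Euclidean (chordal) RKHS distance: ||phi(x) - phi(y)|| = sqrt(2 - 2 k(x, y)).
D_euc = np.sqrt(np.clip(2.0 - 2.0 * K, 0.0, None))

# Riemannian (geodesic) distance on the hypersphere: the arc length
# arccos(<phi(x), phi(y)>) = arccos(k(x, y)).
D_geo = np.arccos(np.clip(K, -1.0, 1.0))
```

Since the arc is always at least as long as the chord, D_geo entry-wise dominates D_euc; the two agree only for coincident points, where both distances vanish.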