This paper addresses the problem of transductive learning of the kernel matrix from a probabilistic perspective. We place a Wishart process prior on the kernel matrix and construct a hierarchical generative model for kernel matrix learning. Specifically, we treat the target kernel matrix as a random matrix following a Wishart distribution with a positive definite parameter matrix and a degree of freedom. The parameter matrix, in turn, has an inverted Wishart distribution (with a positive definite hyperparameter matrix) as its conjugate prior, and the degree of freedom equals the dimensionality of the feature space induced by the target kernel. Formulating kernel learning as a missing data problem, we devise an expectation-maximization (EM) algorithm that infers the missing data, the parameter matrix, and the feature dimensionality in a maximum a posteriori (MAP) manner. With different settings of the target kernel and hyperparameter matrices, the model can be applied to different types of learning problems. In particular, we consider its application in a semi-supervised learning setting and present two classification methods. Classification experiments on several benchmark data sets yield encouraging results. In addition, we devise an EM algorithm for kernel matrix completion.
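The kernel-matrix-completion step mentioned in the abstract can be illustrated with a standard construction: when only a "visible" block of the target kernel matrix is observed, the missing rows and columns can be filled in by Gaussian conditioning on a full auxiliary kernel matrix, which gives the completion minimizing the KL divergence between the zero-mean Gaussians with those covariances subject to the observed block. The sketch below is not the paper's exact EM procedure; the helper `complete_kernel` and its arguments are hypothetical names introduced for illustration.

```python
import numpy as np

def complete_kernel(K_vv, Q, n_visible):
    """Fill in the missing blocks of a kernel matrix.

    K_vv      : observed (visible) block of the target kernel matrix.
    Q         : full auxiliary kernel matrix over all points (assumed PSD).
    n_visible : number of points whose kernel values are observed.

    The hidden blocks are set by Gaussian conditioning on Q: the
    hidden-given-visible regression coefficients come from Q, and the
    observed block K_vv is propagated through them.
    """
    n = Q.shape[0]
    v = slice(0, n_visible)
    h = slice(n_visible, n)
    Q_vv_inv = np.linalg.inv(Q[v, v])
    B = Q[h, v] @ Q_vv_inv                    # regression of hidden on visible
    K_hv = B @ K_vv                           # cross block
    K_hh = Q[h, h] - B @ Q[v, h] + B @ K_vv @ B.T  # Schur complement + propagated part
    K = np.empty((n, n))
    K[v, v] = K_vv
    K[h, v] = K_hv
    K[v, h] = K_hv.T
    K[h, h] = K_hh
    return K
```

A useful sanity check on this construction: if the observed block agrees with the auxiliary matrix (`K_vv = Q[:m, :m]`), the completion reproduces `Q` exactly, and for any positive semidefinite `K_vv` the completed matrix remains symmetric positive semidefinite.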