In this paper we propose an effective and efficient random projection ensemble classifier with multiple empirical kernels. The proposed classifier first randomly selects a subset of the whole training set and uses this subset to construct multiple kernel matrices with different kernels. Then, through the eigendecomposition of each kernel matrix, it explicitly maps each sample into a feature space and feeds the transformed samples into our previous multiple kernel learning framework. Finally, the random selection is repeated multiple times to build a voting ensemble classifier, named RPEMEKL. The contributions of the proposed RPEMEKL are: (1) it efficiently reduces the computational cost of the eigendecomposition, since each kernel matrix is built only on the smaller subset; (2) it effectively improves classification performance through the diversity generated by the different random subset selections; (3) it offers an alternative multiple kernel learning approach from the Empirical Kernel Mapping (EKM) viewpoint, in contrast to traditional Implicit Kernel Mapping (IKM) learning.
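The sketch below illustrates the general idea of the pipeline described above (random subset selection, explicit Empirical Kernel Mapping via eigendecomposition of the subset kernel matrix, and majority voting over the resulting base classifiers). It is a minimal illustration, not the authors' implementation: the choice of RBF and polynomial kernels, the logistic-regression base learner, and all function and parameter names are assumptions made for the example.

```python
# Minimal sketch of a random-projection ensemble with multiple empirical kernels.
# Assumptions (not from the paper): RBF/polynomial kernels, logistic regression
# as the base learner, plain majority voting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel


def empirical_kernel_projection(K_subset, rank_tol=1e-10):
    """Empirical Kernel Mapping: eigendecompose the subset kernel matrix so that
    phi(x) = Lambda^{-1/2} V^T k(x, subset) maps a sample into an explicit space."""
    eigvals, eigvecs = np.linalg.eigh(K_subset)
    keep = eigvals > rank_tol                      # drop near-zero eigen-directions
    return eigvecs[:, keep] / np.sqrt(eigvals[keep])


def train_ensemble(X, y, n_rounds=10, subset_size=100, seed=0):
    rng = np.random.default_rng(seed)
    kernels = [rbf_kernel, polynomial_kernel]      # multiple empirical kernels
    members = []
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=subset_size, replace=False)
        Xs = X[idx]                                # random subset of the training set
        for kern in kernels:
            proj = empirical_kernel_projection(kern(Xs, Xs))
            Phi = kern(X, Xs) @ proj               # explicit feature-space mapping
            clf = LogisticRegression(max_iter=1000).fit(Phi, y)
            members.append((Xs, kern, proj, clf))
    return members


def predict_ensemble(members, X):
    votes = np.stack([clf.predict(kern(X, Xs) @ proj)
                      for Xs, kern, proj, clf in members])
    # Majority vote across all ensemble members for each sample.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)


if __name__ == "__main__":
    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]
    model = train_ensemble(Xtr, ytr)
    print("accuracy:", accuracy_score(yte, predict_ensemble(model, Xte)))
```

Because each kernel matrix is computed only on the sampled subset, the eigendecomposition costs O(subset_size^3) rather than scaling with the full training set, which is the efficiency argument in contribution (1).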