We formulate a supervised, localized dimensionality reduction method using a gating model that partitions the input space into regions and learns a separate dimensionality reduction projection for each region. The gating model, the locally linear projections, and the kernel-based supervised learning algorithm that uses them in its kernel are coupled, and they are trained jointly with an alternating optimization procedure. The proposed local projection kernel maps a data instance into different feature spaces via the local projection matrices, combines these with the gating model, and takes the dot product in the combined feature space. Empirical results on benchmark data sets for visualization and classification tasks validate the idea, and the method generalizes to regression estimation and novelty detection.
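The combined feature space described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a softmax gating model over regions and local linear projections into a common r-dimensional space, so the kernel is the dot product of gated combinations of local projections. The parameter names (`V`, `b` for the gating model, `A` for the list of local projection matrices) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gating(X, V, b):
    """Softmax gating model: soft region memberships g_m(x) for each instance."""
    Z = X @ V + b
    Z -= Z.max(axis=1, keepdims=True)  # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def local_feature_map(X, V, b, A):
    """phi(x) = sum_m g_m(x) * (A_m^T x): gated combination of the
    local projections into a shared r-dimensional feature space."""
    G = gating(X, V, b)                         # (n, P) gating values
    Z = np.stack([X @ Am for Am in A], axis=1)  # (n, P, r) local projections
    return (G[:, :, None] * Z).sum(axis=1)      # (n, r) combined features

def local_projection_kernel(X1, X2, V, b, A):
    """k(x, y) = phi(x)^T phi(y), the dot product in the combined space."""
    return local_feature_map(X1, V, b, A) @ local_feature_map(X2, V, b, A).T

# Toy usage: d-dimensional inputs, P local regions, projections to r dimensions.
d, P, r, n = 5, 3, 2, 8
X = rng.standard_normal((n, d))
V, b = rng.standard_normal((d, P)), np.zeros(P)
A = [rng.standard_normal((d, r)) for _ in range(P)]
K = local_projection_kernel(X, X, V, b, A)
# K is a symmetric positive semidefinite Gram matrix, since it is
# computed from an explicit feature map.
```

In the full method, the gating parameters and the projection matrices would not be random but trained, together with the supervised learner that consumes `K`, by alternating optimization.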