Graph-based dimensionality reduction (DR) methods have been applied successfully to many practical problems, such as face recognition, where graphs play a crucial role in modeling the data distribution or structure. In practice, however, the ideal graph is difficult to discover. One usually constructs a graph empirically according to various motivations, priors, or assumptions, independently of the subsequent computation of the DR mapping. Unlike previous work, in this paper we attempt to learn a graph closely linked with the DR process, and propose an algorithm called dimensionality reduction with adaptive graph (DRAG), whose idea is to simultaneously learn a graph in the neighborhood of a pre-specified one while seeking the projection matrix. The pre-specified graph is treated as a noisy observation of the ideal one, and the squared Frobenius divergence is used to measure their difference in the objective function. This yields an elegant graph update formula that naturally fuses information from the original and transformed data. In particular, the optimal graph is shown to be a weighted sum of the pre-defined graph in the original space and a new graph that depends on the transformed space. Empirical results on several face datasets demonstrate the effectiveness of the proposed algorithm.
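The weighted-sum form of the optimal graph can be illustrated with a minimal sketch. The exact DRAG objective and update are not reproduced here; the mixing weight `alpha`, the Gaussian kernel, and the bandwidth `sigma` below are assumptions chosen for illustration of how a pre-specified graph can be blended with a graph built in the projected space.

```python
import numpy as np

def adaptive_graph_update(X, W0, P, alpha=0.5, sigma=1.0):
    """Illustrative graph update (not the exact DRAG formula):
    blend a pre-specified affinity matrix W0, built in the original
    space, with a similarity graph S built in the projected space.

    X     : (n, d) data matrix, one sample per row
    W0    : (n, n) pre-specified graph (noisy observation of the ideal one)
    P     : (d, k) projection matrix
    alpha : assumed mixing weight between the two graphs
    sigma : assumed Gaussian kernel bandwidth
    """
    Y = X @ P  # transformed data, shape (n, k)
    # Pairwise squared Euclidean distances in the transformed space.
    sq = np.sum(Y**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (Y @ Y.T)
    # Gaussian similarities; clip tiny negative values from round-off.
    S = np.exp(-np.maximum(D2, 0.0) / (2.0 * sigma**2))
    np.fill_diagonal(S, 0.0)
    # Optimal graph as a weighted sum of the original-space graph
    # and the transformed-space graph.
    return alpha * W0 + (1.0 - alpha) * S
```

In a full alternating scheme one would re-estimate the projection `P` for the updated graph `W` and iterate; setting `alpha = 1` recovers the fixed pre-specified graph, i.e. conventional graph-based DR.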