Latent variable models and factor analysis
Latent variable models are powerful dimensionality reduction approaches in machine learning and pattern recognition. However, such methods work well only under the strict assumption that the training and testing samples are independent and identically distributed (i.i.d.). When the samples come from different domains, the distribution of the testing dataset is no longer identical to that of the training dataset, and the performance of latent variable models degrades because the parameters learned during training do not suit the testing dataset. This limits the generalization and applicability of traditional latent variable models. To address this issue, a transfer learning framework for latent variable models is proposed that uses the distance (or divergence) between the two datasets to modify the parameters of the learned latent variable model. The model therefore does not need to be rebuilt; its parameters are simply adjusted according to the divergence so that it adapts to different datasets. Experimental results on several real datasets demonstrate the advantages of the proposed framework.
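As a rough illustration of this parameter-adjustment idea (a minimal sketch, not the paper's algorithm), the Python code below pretrains a PCA projection on the source data and then adjusts it by descending a symmetric KL divergence between Gaussian fits of the projected source and target samples. The choice of PCA as the latent variable model, the Gaussian/KL form of the divergence, and all function names here are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch: PCA stands in for the latent variable model,
# and a symmetric KL divergence stands in for the dataset divergence.

def fit_pca(X, k):
    """Fit PCA on the source (training) data: top-k right singular vectors."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                          # d x k projection matrix

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL( N(mu_p, cov_p) || N(mu_q, cov_q) )."""
    d = mu_p.size
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ cov_p) + diff @ inv_q @ diff - d
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def divergence(W, X_src, X_tgt):
    """Symmetric KL between Gaussian fits of the two projected datasets."""
    k = W.shape[1]
    Zs, Zt = X_src @ W, X_tgt @ W
    ms, Cs = Zs.mean(axis=0), np.cov(Zs, rowvar=False) + 1e-6 * np.eye(k)
    mt, Ct = Zt.mean(axis=0), np.cov(Zt, rowvar=False) + 1e-6 * np.eye(k)
    return gaussian_kl(ms, Cs, mt, Ct) + gaussian_kl(mt, Ct, ms, Cs)

def adapt(W, X_src, X_tgt, lr=1e-3, steps=50, eps=1e-5):
    """Adjust the pretrained projection W to shrink the source/target
    divergence via crude finite-difference gradient descent.
    The model is not rebuilt; only its parameters move."""
    for _ in range(steps):
        base = divergence(W, X_src, X_tgt)
        grad = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            Wp = W.copy()
            Wp[idx] += eps
            grad[idx] = (divergence(Wp, X_src, X_tgt) - base) / eps
        W, _ = np.linalg.qr(W - lr * grad)   # re-orthonormalize the columns
    return W
```

A call such as `W = adapt(fit_pca(X_src, 5), X_src, X_tgt)` then yields a projection nudged toward the target domain without refitting from scratch; any gradient-based or closed-form update driven by a Bregman-type divergence could replace the finite-difference step used here for simplicity.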