Learning incoherent sparse and low-rank patterns from multiple tasks
Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining
Discriminative codeword selection for image representation
Proceedings of the international conference on Multimedia
Variational inference with graph regularization for image annotation
ACM Transactions on Intelligent Systems and Technology (TIST)
Non-goal scene analysis for soccer video
Neurocomputing
Local Kernel Feature Analysis (LKFA) for object recognition
Neurocomputing
m-SNE: multiview stochastic neighbor embedding
ICONIP'10 Proceedings of the 17th international conference on Neural information processing: theory and algorithms - Volume Part I
Backward-forward least angle shrinkage for sparse quadratic optimization
ICONIP'10 Proceedings of the 17th international conference on Neural information processing: theory and algorithms - Volume Part I
Random projection tree and multiview embedding for large-scale image retrieval
ICONIP'10 Proceedings of the 17th international conference on Neural information processing: models and applications - Volume Part II
Feature level analysis for 3D facial expression recognition
Neurocomputing
Orthogonal Complete Discriminant Locality Preserving Projections for Face Recognition
Neural Processing Letters
Transfer latent variable model based on divergence analysis
Pattern Recognition
A bayesian framework for learning shared and individual subspaces from multiple data sources
PAKDD'11 Proceedings of the 15th Pacific-Asia conference on Advances in knowledge discovery and data mining - Volume Part I
Face Recognition Using Kernel UDP
Neural Processing Letters
Optimized cluster-based filtering algorithm for graph metadata
Information Sciences: an International Journal
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
ACM Transactions on Knowledge Discovery from Data (TKDD)
Social image annotation via cross-domain subspace learning
Multimedia Tools and Applications
Dimensionality reduction by Mixed Kernel Canonical Correlation Analysis
Pattern Recognition
Interactive cartoon reusing by transfer learning
Signal Processing
Visual query processing for efficient image retrieval using a SOM-based filter-refinement scheme
Information Sciences: an International Journal
Sparse transfer learning for interactive video search reranking
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
Kinship verification through transfer learning
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Three
Self-taught dimensionality reduction on the high-dimensional small-sized data
Pattern Recognition
Active SVM-based relevance feedback using multiple classifiers ensemble and features reweighting
Engineering Applications of Artificial Intelligence
Fast reduction of speckle noise in real ultrasound images
Signal Processing
Regularized nonnegative shared subspace learning
Data Mining and Knowledge Discovery
Adaptive object detection by implicit sub-class sharing features
Signal Processing
Local discriminative distance metrics ensemble learning
Pattern Recognition
Facial expression recognition based on Hessian regularized support vector machine
Proceedings of the Fifth International Conference on Internet Multimedia Computing and Service
G-Optimal Feature Selection with Laplacian regularization
Neurocomputing
Learning person-specific models for facial expression and action unit recognition
Pattern Recognition Letters
Similar handwritten Chinese character recognition by kernel discriminative locality alignment
Pattern Recognition Letters
Transfer learning with one-class data
Pattern Recognition Letters
Regularization principles lead to approximation schemes for a variety of learning problems, e.g., regularizing the norm in a reproducing kernel Hilbert space to handle ill-posed problems. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained from training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it into popular subspace learning algorithms: principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective for cross-domain learning problems.
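The divergence term at the heart of the abstract can be sketched concretely. Under one common choice — Gaussian kernel density estimates for the projected training and testing samples and the squared Euclidean generator, for which the Bregman divergence reduces to the quadratic divergence D = ∫ (p_tr(z) − p_te(z))² dz — the regularizer has a closed form, because the integral of a product of two isotropic Gaussians is itself a Gaussian evaluation. The function names below are illustrative, not from the paper:

```python
import numpy as np


def gaussian_overlap(zi, zj, sigma2):
    # Closed-form integral of a product of two isotropic Gaussian densities:
    # ∫ N(z; zi, σ²I) N(z; zj, σ²I) dz = N(zi; zj, 2σ²I)
    d = zi.shape[0]
    diff = zi - zj
    return np.exp(-diff @ diff / (4.0 * sigma2)) / ((4.0 * np.pi * sigma2) ** (d / 2.0))


def quadratic_divergence(Ztr, Zte, sigma2=1.0):
    """Squared-loss (quadratic) Bregman divergence between Gaussian-KDE
    estimates of the projected training and testing distributions:
    D = ∫ (p_tr(z) - p_te(z))² dz, evaluated exactly via pairwise overlaps."""
    def mean_overlap(A, B):
        return np.mean([gaussian_overlap(a, b, sigma2) for a in A for b in B])
    return mean_overlap(Ztr, Ztr) - 2.0 * mean_overlap(Ztr, Zte) + mean_overlap(Zte, Zte)
```

In the framework the abstract describes, a term like this (weighted by a trade-off parameter) would be added to the objective of PCA, FLDA, LPP, MFA, or DLA and minimized jointly over the projection that maps samples into the subspace; the divergence shrinks as the projected training and testing distributions align.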