Undoing the damage of dataset bias. ECCV'12 Proceedings of the 12th European Conference on Computer Vision - Volume Part I.
Discovering latent domains for multisource domain adaptation. ECCV'12 Proceedings of the 12th European Conference on Computer Vision - Volume Part II.
Transfer discriminant-analysis of canonical correlations for view-transfer action recognition. PCM'12 Proceedings of the 13th Pacific-Rim Conference on Advances in Multimedia Information Processing.
FIDOS: A generalized Fisher based feature extraction method for domain shift. Pattern Recognition.
Beyond dataset bias: multi-task unaligned shared knowledge transfer. ACCV'12 Proceedings of the 11th Asian Conference on Computer Vision - Volume Part I.
Linear cross-modal hashing for efficient multimedia search. Proceedings of the 21st ACM International Conference on Multimedia.
Undo the codebook bias by linear transformation for visual applications. Proceedings of the 21st ACM International Conference on Multimedia.
Discriminative feature selection for multi-view cross-domain learning. Proceedings of the 22nd ACM International Conference on Information & Knowledge Management.
Learning person-specific models for facial expression and action unit recognition. Pattern Recognition Letters.
Transfer learning with one-class data. Pattern Recognition Letters.
Adapting a classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving increasing attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learned to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
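To make the geodesic-sampling idea concrete, the following is a minimal sketch (not the authors' code) of how intermediate subspaces between a source and a target subspace might be computed. It assumes the two subspaces are given as orthonormal bases of the same dimension, uses the standard principal-angles parametrization of the Grassmann geodesic, and the helper name `grassmann_geodesic` is hypothetical:

```python
import numpy as np

def grassmann_geodesic(Us, Ut, t):
    """Return an orthonormal basis of the subspace at parameter t on the
    geodesic from span(Us) to span(Ut) on the Grassmann manifold.

    Us, Ut: D x d matrices with orthonormal columns (same dimension d).
    t: scalar in [0, 1]; t=0 recovers span(Us), t=1 recovers span(Ut).
    (Illustrative sketch, not the paper's implementation.)
    """
    # Principal angles between the two subspaces via SVD of Us^T Ut.
    V1, cos_theta, V2t = np.linalg.svd(Us.T @ Ut)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

    # Direction of motion along the geodesic, with columns normalized
    # by sin(theta) where the angle is nonzero.
    A = Ut @ V2t.T - Us @ (V1 * cos_theta)
    sin_theta = np.sin(theta)
    nonzero = sin_theta > 1e-12
    A[:, nonzero] /= sin_theta[nonzero]

    # Geodesic: rotate each principal direction by a fraction t of its angle.
    Phi = (Us @ V1) * np.cos(t * theta) + A * np.sin(t * theta)

    # Re-orthonormalize to guard against numerical drift.
    Q, _ = np.linalg.qr(Phi)
    return Q
```

Sampling, say, `t = 0.0, 0.25, 0.5, 0.75, 1.0` yields a sequence of intermediate subspaces; projecting the labeled source data onto each of them produces the augmented features on which the discriminative classifier is then trained.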