Synthesizing queries for handwritten word image retrieval
Pattern Recognition
Undoing the damage of dataset bias
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part I
Discovering latent domains for multisource domain adaptation
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part II
Road scene segmentation from a single image
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part VII
Recognizing materials from virtual examples
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part IV
No bias left behind: covariate shift adaptation for discriminative 3d pose estimation
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part IV
Data-driven vehicle identification by image matching
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume 2
Transfer discriminant-analysis of canonical correlations for view-transfer action recognition
PCM'12 Proceedings of the 13th Pacific-Rim conference on Advances in Multimedia Information Processing
Cross-database transfer learning via learnable and discriminant error-correcting output codes
ACCV'12 Proceedings of the 11th Asian conference on Computer Vision - Volume Part I
An adaptation framework for head-pose classification in dynamic multi-view scenarios
ACCV'12 Proceedings of the 11th Asian conference on Computer Vision - Volume Part II
Undo the codebook bias by linear transformation for visual applications
Proceedings of the 21st ACM international conference on Multimedia
Learning person-specific models for facial expression and action unit recognition
Pattern Recognition Letters
Transfer learning with one-class data
Pattern Recognition Letters
In real-world applications, "what you saw" during training is often not "what you get" during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks.
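To make the idea of an asymmetric cross-domain transformation concrete, the sketch below is a deliberately simplified linear analogue, not the paper's ARC-t method (which learns non-linear transforms in kernel space with similarity constraints). It fits a ridge-regularized linear map W between hypothetical correspondence pairs drawn from a 40-D source feature space and a 25-D target feature space, illustrating that the mapping need not be symmetric or dimension-preserving; all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: 40-D features; target domain: 25-D features
# (different dimensionality, as the asymmetric setting allows).
d_src, d_tgt, n_pairs = 40, 25, 200

# Hypothetical cross-domain correspondence pairs: source feature x_i
# should land close to its paired target feature y_i after mapping.
X = rng.normal(size=(n_pairs, d_src))                   # source features
A = rng.normal(size=(d_tgt, d_src)) / np.sqrt(d_src)    # unknown "true" map
Y = X @ A.T + 0.01 * rng.normal(size=(n_pairs, d_tgt))  # paired target features

# Ridge-regularized least squares for an asymmetric linear transform W:
#   min_W  ||X W^T - Y||_F^2 + lam * ||W||_F^2
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(d_src), X.T @ Y).T  # (d_tgt, d_src)

# Adapted source features now live in the target feature space.
X_adapted = X @ W.T
err = np.linalg.norm(X_adapted - Y) / np.linalg.norm(Y)
print(f"relative fit error: {err:.3f}")
```

Because W is a general d_tgt-by-d_src matrix rather than a square symmetric one, the same recipe applies when the two domains use different feature types or codebook sizes; the kernelized formulation in the paper extends this to non-linear transforms and to categories unseen at training time.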