Traditional pattern recognition techniques often assume that the training and test data follow the same distribution. However, this assumption rarely holds in real-world problems: data from the same classes but different domains, e.g., data collected under different conditions, may exhibit different characteristics. We introduce FIDOS, a generalized FIsher-based method for the DOmain Shift problem, which learns domain-invariant features in a supervised manner. Unlike classical Fisher feature extraction, FIDOS minimizes not only the within-class scatter but also the difference in distributions between domains. The subspace constructed by FIDOS therefore reduces the distribution drift among domains while preserving the discriminative structure across classes. A further advantage over classical Fisher analysis is that FIDOS can extract more features when multiple source domains are available in the training set; this is essential for good classification, especially when the number of classes is small. Experimental results on both artificial and real data, together with comparisons against other methods, demonstrate the effectiveness of our approach in classifying objects under domain shift.
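To make the idea concrete, the kind of objective the abstract describes can be sketched as a generalized eigenvalue problem: maximize between-class scatter while penalizing both within-class scatter and a between-domain scatter term. This is a minimal illustrative sketch, not the authors' actual FIDOS formulation; the function name, the use of domain-mean scatter for the distribution-difference term, and the trade-off parameter `lam` are all assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import eigh


def fisher_domain_shift(X, y, d, n_components, lam=1.0):
    """Illustrative Fisher-style projection with a domain-scatter penalty.

    X : (n, p) data matrix
    y : (n,) class labels
    d : (n,) domain labels
    Returns a (p, n_components) projection matrix.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    S_b = np.zeros((p, p))  # between-class scatter
    S_w = np.zeros((p, p))  # within-class scatter
    S_d = np.zeros((p, p))  # between-domain scatter (assumed proxy
                            # for the distribution difference)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mu, mc - mu)
        S_w += (Xc - mc).T @ (Xc - mc)
    for dom in np.unique(d):
        Xd = X[d == dom]
        md = Xd.mean(axis=0)
        S_d += len(Xd) * np.outer(md - mu, md - mu)

    # Maximize class separation relative to within-class plus
    # domain scatter; a small ridge keeps the denominator definite.
    B = S_w + lam * S_d + 1e-6 * np.eye(p)
    _, W = eigh(S_b, B)           # eigenvalues in ascending order
    return W[:, ::-1][:, :n_components]  # top components first
```

Projecting with `X @ W` then yields features in which classes stay separated while the domain means are pulled together, which is the qualitative behavior the abstract attributes to the learned subspace.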