Beyond cross-domain learning: Multiple-domain nonnegative matrix factorization
Engineering Applications of Artificial Intelligence
Cross-domain learning has recently become one of the most important research directions in data mining and machine learning. In multi-domain learning, one difficulty is that classification patterns and data distributions differ across domains, so knowledge (e.g., a classification hyperplane) cannot be transferred directly from one domain to another. This paper proposes a framework that combines class-separate objectives (maximizing separability among classes) with domain-merge objectives (minimizing separability among domains) to achieve cross-domain representation learning. Three specific methods built on this framework, called DMCS_CSF, DMCS_FDA and DMCS_PCDML, are presented, and experimental results validate their effectiveness.
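The abstract does not give the concrete formulations of DMCS_CSF, DMCS_FDA or DMCS_PCDML, but the general idea of combining a class-separate objective with a domain-merge objective can be sketched. The following is a minimal, illustrative implementation assuming a Fisher-style formulation: maximize between-class scatter while penalizing between-domain scatter via a generalized eigenproblem. The function and parameter names (`dmcs_projection`, `lam`) are hypothetical, not from the paper.

```python
import numpy as np

def scatter_between(X, groups):
    """Between-group scatter: sum over groups of n_g * (mu_g - mu)(mu_g - mu)^T."""
    mu = X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg = X[groups == g]
        d = (Xg.mean(axis=0) - mu)[:, None]
        S += len(Xg) * (d @ d.T)
    return S

def dmcs_projection(X, y, dom, k, lam=1.0):
    """Illustrative class-separate / domain-merge projection (not the paper's exact method).

    X   : (n, d) data matrix
    y   : (n,) class labels       -- separability to maximize
    dom : (n,) domain labels      -- separability to minimize
    k   : number of projection directions
    lam : trade-off between the two objectives (assumed hyperparameter)
    """
    S_c = scatter_between(X, y)    # class-separate objective
    S_d = scatter_between(X, dom)  # domain-merge objective
    # Solve (lam * S_d + eps I)^{-1} S_c for its leading eigenvectors:
    # directions that separate classes while merging domains.
    eps = 1e-6 * np.eye(X.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.solve(lam * S_d + eps, S_c))
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real
```

After learning the projection on pooled multi-domain data, each domain's samples are mapped as `X @ W` into a shared representation in which class structure is preserved but domain structure is suppressed.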