The problem of learning a mapping between input variables and structured, interdependent output variables covers sequential, spatial, and relational learning as well as the prediction of recursive structures. Joint feature representations of the input and output variables have paved the way for applying discriminative learners such as SVMs to this class of problems. We address the problem of semi-supervised learning in joint input/output spaces. The co-training approach is based on the principle of maximizing the consensus among multiple independent hypotheses; we develop this principle into a semi-supervised support vector learning algorithm for joint input/output spaces and arbitrary loss functions. Experiments investigate the benefit of semi-supervised structured models in terms of accuracy and F1 score.
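To illustrate the consensus principle the abstract refers to, the following is a minimal co-training sketch on a toy problem. It is a hypothetical illustration, not the paper's structured-output SVM: two view-specific nearest-centroid classifiers take turns labeling their most confident unlabeled example and feeding it back into both training sets, so the hypotheses are gradually pushed toward agreement.

```python
# Toy co-training sketch (illustrative assumption, not the paper's algorithm):
# each view trains a nearest-centroid classifier on 1-D features and, per
# round, confidently labeled unlabeled points are added to both views.

def centroid_fit(xs, ys):
    """Fit per-class centroids (means) of 1-D features."""
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def centroid_predict(model, x):
    """Return (label, confidence); confidence is negative centroid distance."""
    label = min(model, key=lambda y: abs(x - model[y]))
    return label, -abs(x - model[label])

def co_train(view1, view2, labels, unlabeled1, unlabeled2, rounds=3):
    """Each round, view 1 labels its most confident unlabeled example and
    the pseudo-label is added to both views' training sets."""
    x1, y1 = list(view1), list(labels)
    x2, y2 = list(view2), list(labels)
    u1, u2 = list(unlabeled1), list(unlabeled2)
    for _ in range(rounds):
        if not u1:
            break
        m1 = centroid_fit(x1, y1)
        # pick the unlabeled example view 1 is most confident about
        i = max(range(len(u1)), key=lambda j: centroid_predict(m1, u1[j])[1])
        lab, _ = centroid_predict(m1, u1[i])
        x1.append(u1[i]); y1.append(lab)   # grow view 1's training set
        x2.append(u2[i]); y2.append(lab)   # teach view 2 the same label
        del u1[i], u2[i]
    return centroid_fit(x1, y1), centroid_fit(x2, y2)
```

In the full co-training setting the two views would be conditionally independent feature splits, and each hypothesis would teach the other rather than itself; the sketch collapses this to a single confidence-ranked labeling loop for brevity.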