Stacked generalization. Neural Networks.
Combining labeled and unlabeled data with co-training. COLT '98 Proceedings of the eleventh annual conference on Computational learning theory.
Machine Learning.
Exploiting unlabeled data in ensemble methods. Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining.
Tri-Training: Exploiting Unlabeled Data Using Three Classifiers. IEEE Transactions on Knowledge and Data Engineering.
Large scale semi-supervised linear SVMs. SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval.
Non-projective dependency parsing using spanning tree algorithms. HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing.
Effective self-training for parsing. HLT-NAACL '06 Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics.
Semisupervised Learning for Computational Linguistics.
When Semi-supervised Learning Meets Ensemble Learning. MCS '09 Proceedings of the 8th International Workshop on Multiple Classifier Systems.
CoNLL-X shared task on multilingual dependency parsing. CoNLL-X '06 Proceedings of the Tenth Conference on Computational Natural Language Learning.
Data-driven dependency parsing of new languages using incomplete and noisy training data. CoNLL '09 Proceedings of the Thirteenth Conference on Computational Natural Language Learning.
EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Parser combination by reparsing. NAACL-Short '06 Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers.
Improving parsing accuracy by combining diverse dependency parsers. Parsing '05 Proceedings of the Ninth International Workshop on Parsing Technology.
IWPT '09 Proceedings of the 11th International Conference on Parsing Technologies.
Dependency parsing with energy-based reinforcement learning. IWPT '09 Proceedings of the 11th International Conference on Parsing Technologies.
Ensemble models for dependency parsing: cheap and good? HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Chinese chunking with tri-training learning. ICCPOL '06 Proceedings of the 21st international conference on Computer Processing of Oriental Languages: beyond the orient: the research challenges ahead.
Using co-training and self-training in semi-supervised multiple classifier systems. SSPR/SPR '06 Proceedings of the 2006 joint IAPR international conference on Structural, Syntactic, and Statistical Pattern Recognition.
Improve Computer-Aided Diagnosis With Machine Learning Techniques Using Undiagnosed Samples. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans.
Martins et al. (2008) presented what is, to the best of our knowledge, still the best overall result on the CoNLL-X Shared Task datasets. This paper shows how triads of the stacked dependency parsers described in Martins et al. (2008) can label unlabeled data for one another, in a manner similar to co-training, producing final parsers that are significantly better than any of the stacked input parsers. We evaluate our system on five datasets from the CoNLL-X Shared Task and obtain 10-20% error reductions, including the best reported results on four of them. We also compare our approach to other semi-supervised learning algorithms.
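The tri-training scheme the abstract describes — three learners, where each learner is retrained on unlabeled examples that its two peers agree on — can be sketched in miniature. The following is an illustrative toy, not the paper's system: a simple 1-D threshold classifier stands in for the stacked dependency parsers, and all names (`ThresholdClassifier`, `tri_train`, `vote`) are hypothetical.

```python
import random

class ThresholdClassifier:
    """Toy 1-D classifier: learns a decision threshold as the midpoint
    between the means of the positive and negative training points."""
    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return self

    def predict(self, x):
        return 1 if x >= self.threshold else 0

def tri_train(labeled, unlabeled, rounds=3, seed=0):
    """Tri-training sketch: initialize three diverse classifiers on
    bootstrap resamples, then let each pair of peers pseudo-label the
    unlabeled pool for the third classifier."""
    rng = random.Random(seed)
    clfs = []
    for _ in range(3):
        # Resample until both classes are present, so fitting is well-defined.
        while True:
            sample = [rng.choice(labeled) for _ in labeled]
            if len({y for _, y in sample}) == 2:
                break
        xs, ys = zip(*sample)
        clfs.append(ThresholdClassifier().fit(xs, ys))
    for _ in range(rounds):
        new_clfs = []
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            # Peers j and k label the unlabeled pool; keep points they agree on.
            pseudo = [(x, clfs[j].predict(x)) for x in unlabeled
                      if clfs[j].predict(x) == clfs[k].predict(x)]
            xs, ys = zip(*(labeled + pseudo))
            new_clfs.append(ThresholdClassifier().fit(xs, ys))
        clfs = new_clfs
    return clfs

def vote(clfs, x):
    """Final prediction by majority vote of the three classifiers."""
    return 1 if sum(c.predict(x) for c in clfs) >= 2 else 0
```

In the paper's setting the three learners are stacked dependency parsers rather than threshold rules, and agreement is over parse structures rather than binary labels, but the control flow — peers pseudo-labeling for the third learner, then retraining — is the same.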