Combining labeled and unlabeled data with co-training. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, special issue on information retrieval.
Analyzing the effectiveness and applicability of co-training. Proceedings of the Ninth International Conference on Information and Knowledge Management.
Unsupervised word sense disambiguation rivaling supervised methods. ACL '95: Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics.
Word translation disambiguation using bilingual bootstrapping. ACL '02: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.
Word sense disambiguation by learning from unlabeled data. ACL '00: Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics.
Supporting Arabic cross-lingual retrieval using contextual information. IRFC '11: Proceedings of the Second International Conference on Multidisciplinary Information Retrieval Facility.
A new fuzzy rule-based classification system for word sense disambiguation. Intelligent Data Analysis.
In this paper, we improve the unsupervised learning method of Nigam et al., which uses the Expectation-Maximization (EM) algorithm for text classification, in order to apply it to word sense disambiguation (WSD). The improved method stops the EM algorithm at the optimal iteration number, and we propose two methods for estimating that number. In experiments, we solved 50 noun WSD problems from the Japanese Dictionary Task in SENSEVAL-2; our method's score matches the best published score on this task. Furthermore, our methods also proved effective for verb WSD problems.
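The semi-supervised scheme the abstract refers to can be sketched as naive-Bayes EM over labeled and unlabeled documents in the style of Nigam et al. The sketch below is illustrative only: the toy sense-tagged data, the smoothing, and the simple stopping rule (halt when the unlabeled-data log-likelihood stops improving, a stand-in for the paper's two proposed iteration-number estimators) are all assumptions, not the authors' actual procedure.

```python
import math

# Toy bag-of-words data for an ambiguous word ("bank"):
# sense 0 = financial, sense 1 = river. Each doc is a word-count dict.
labeled = [
    ({"bank": 2, "money": 1}, 0),
    ({"bank": 1, "loan": 2}, 0),
    ({"bank": 1, "river": 2}, 1),
    ({"bank": 2, "water": 1}, 1),
]
unlabeled = [
    {"money": 2, "loan": 1},
    {"river": 1, "water": 2},
    {"bank": 1, "money": 1},
    {"bank": 1, "river": 1},
]

vocab = sorted({w for d, _ in labeled for w in d} | {w for d in unlabeled for w in d})
K = 2  # number of senses

def m_step(docs, resp):
    """Re-estimate class priors and per-class word probabilities from
    fractionally labeled docs, with add-one (Laplace) smoothing."""
    prior = [1.0] * K
    counts = [{w: 1.0 for w in vocab} for _ in range(K)]
    for d, r in zip(docs, resp):
        for k in range(K):
            prior[k] += r[k]
            for w, c in d.items():
                counts[k][w] += r[k] * c
    total = sum(prior)
    prior = [p / total for p in prior]
    theta = []
    for k in range(K):
        z = sum(counts[k].values())
        theta.append({w: counts[k][w] / z for w in vocab})
    return prior, theta

def e_step(d, prior, theta):
    """Posterior P(sense | doc) under naive Bayes, plus log P(doc)."""
    logs = [math.log(prior[k]) + sum(c * math.log(theta[k][w])
                                     for w, c in d.items())
            for k in range(K)]
    m = max(logs)
    exps = [math.exp(l - m) for l in logs]
    z = sum(exps)
    return [e / z for e in exps], m + math.log(z)

# Initialize the classifier from the labeled data only.
docs = [d for d, _ in labeled]
resp = [[1.0 if k == y else 0.0 for k in range(K)] for _, y in labeled]
prior, theta = m_step(docs, resp)

# EM over labeled + unlabeled data; keep the model from the iteration
# with the best unlabeled-data log-likelihood (the assumed stopping rule).
best_ll, best_model = -math.inf, (prior, theta)
for it in range(20):
    u_resp, ll = [], 0.0
    for d in unlabeled:
        r, l = e_step(d, prior, theta)
        u_resp.append(r)
        ll += l
    if ll <= best_ll + 1e-9:
        break
    best_ll, best_model = ll, (prior, theta)
    prior, theta = m_step(docs + unlabeled, resp + u_resp)

prior, theta = best_model
pred, _ = e_step({"money": 1, "loan": 1}, prior, theta)
print(pred.index(max(pred)))  # predicted sense for a money/loan context
```

Stopping EM early matters here because, with few labeled examples, later iterations can let the unlabeled data pull the class distributions away from the intended senses; the paper's contribution is precisely estimating the iteration at which to stop.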