This paper examines the importance of the clustering method in unsupervised word sense disambiguation. It shows that a powerful clustering technique can compensate for the lack of external knowledge of any kind, and it argues that feature selection does not always improve disambiguation results, especially when an advanced, state-of-the-art method such as spectral clustering is used. Disambiguation results obtained with spectral clustering for the main parts of speech (nouns, adjectives, verbs) are compared with those of a classical clustering approach based on the Naïve Bayes model. For unsupervised word sense disambiguation with an underlying Naïve Bayes model, two entirely different ways of performing feature selection are surveyed; the type of feature selection yielding the best results (WordNet-based feature selection) is then also applied to spectral clustering. The conclusion is that spectral clustering without feature selection (relying instead on its own feature weighting) produces superior disambiguation results for all parts of speech.
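The core spectral idea behind such an approach can be illustrated with a minimal sketch (an illustration only, not the paper's implementation; the affinity matrix and the two-way split are assumptions for the example): contexts of an ambiguous word form the nodes of a similarity graph, and the sign pattern of the Fiedler vector of the graph Laplacian yields a two-way sense partition.

```python
def fiedler_bipartition(W):
    """Split the nodes of a weighted similarity graph W into two clusters
    using the sign of the Fiedler vector (the eigenvector of the
    second-smallest eigenvalue of the Laplacian L = D - W), computed by
    power iteration on a shifted matrix with the trivial constant
    eigenvector projected out."""
    n = len(W)
    deg = [sum(row) for row in W]
    # Unnormalized graph Laplacian L = D - W
    L = [[(deg[i] if i == j else 0.0) - W[i][j] for j in range(n)]
         for i in range(n)]
    # M = shift*I - L maps L's smallest eigenvalues to M's largest
    shift = 2.0 * max(deg) + 1.0
    v = [float(i + 1) for i in range(n)]  # generic starting vector
    for _ in range(500):
        # Project out the constant (trivial) eigenvector of L
        mean = sum(v) / n
        v = [x - mean for x in v]
        # Multiply by M = shift*I - L
        v = [shift * v[i] - sum(L[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return [0 if x < 0 else 1 for x in v]

# Toy affinity matrix: contexts 0 and 1 are strongly similar to each
# other, as are contexts 2 and 3, with only weak cross-group links.
W = [[0.00, 1.00, 0.05, 0.00],
     [1.00, 0.00, 0.00, 0.05],
     [0.05, 0.00, 0.00, 1.00],
     [0.00, 0.05, 1.00, 0.00]]
labels = fiedler_bipartition(W)
```

On this toy graph the sign split recovers the two groups of contexts, i.e. nodes 0 and 1 receive one label and nodes 2 and 3 the other. In practice the affinity matrix would be built from weighted context features, which is where the "own feature weighting" of spectral clustering enters.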