Near-synonyms are useful knowledge resources for many natural language applications, such as query expansion for information retrieval (IR) and paraphrasing for text generation. However, near-synonyms are not necessarily interchangeable in context because of their specific usage and syntactic constraints. It is therefore worthwhile to develop algorithms that verify whether a near-synonym matches a given context. In this paper, we treat near-synonym substitution as a classification task, in which a classifier is trained for each near-synonym set to classify test examples into one of the near-synonyms in the set. We also propose using discriminative training to improve the classifiers by distinguishing positive and negative features for each near-synonym. Experimental results show that the proposed method achieves higher accuracy than both the pointwise mutual information (PMI) and n-gram-based methods used in previous studies.
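The PMI baseline that the abstract compares against can be sketched as follows: score each candidate near-synonym by summing its pointwise mutual information with the surrounding context words, and pick the highest-scoring one. This is a minimal, self-contained illustration; the synonym set and toy corpus are invented for the example and are not taken from the paper.

```python
import math
from collections import Counter

# Toy corpus: each entry pairs a near-synonym from the set
# {"error", "mistake"} with its context words (illustrative data only).
corpus = [
    ("error", ["system", "fatal", "code"]),
    ("error", ["runtime", "code", "fatal"]),
    ("mistake", ["honest", "regret", "apology"]),
    ("mistake", ["careless", "regret", "honest"]),
]

synonyms = ["error", "mistake"]

# Count co-occurrences of each context word with each near-synonym.
syn_count = Counter()
word_count = Counter()
pair_count = Counter()
total = 0
for syn, words in corpus:
    for w in words:
        syn_count[syn] += 1
        word_count[w] += 1
        pair_count[(syn, w)] += 1
        total += 1

def pmi(syn, word):
    """Pointwise mutual information between a near-synonym and a context word."""
    joint = pair_count[(syn, word)]
    if joint == 0:
        return 0.0  # unseen pair contributes nothing
    p_joint = joint / total
    p_syn = syn_count[syn] / total
    p_word = word_count[word] / total
    return math.log2(p_joint / (p_syn * p_word))

def choose(context_words):
    """Pick the near-synonym whose summed PMI with the context is highest."""
    scores = {s: sum(pmi(s, w) for w in context_words) for s in synonyms}
    return max(scores, key=scores.get)

print(choose(["fatal", "code"]))     # favors "error" on this toy data
print(choose(["honest", "regret"]))  # favors "mistake"
```

The classification approach proposed in the paper differs in that it learns a per-set classifier over context features rather than summing association scores; the PMI scorer above serves only to make the baseline concrete.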