Towards full automation of lexicon construction

  • Authors: Richard Rohwer, Dayne Freitag
  • Affiliations: Fair Isaac Corporation

  • Venue: CLS '04: Proceedings of the HLT-NAACL Workshop on Computational Lexical Semantics
  • Year: 2004


Abstract

We describe work in progress aimed at developing methods for automatically constructing a lexicon using only statistical data derived from analysis of corpora, a problem we call lexical optimization. Specifically, we use statistical methods alone to obtain information equivalent to syntactic categories, and to discover the semantically meaningful units of text, which may be multi-word units or polysemous terms in context. Our guiding principle is to employ a notion of "meaningfulness" that can be quantified information-theoretically, so that plausible variants of a lexicon can be judged relative to each other. We describe a technique of this nature called information-theoretic co-clustering and give results of a series of experiments built around it that demonstrate the main ingredients of lexical optimization. We conclude by describing our plans for further improvements, and for applying the same mathematical principles to other problems in natural language processing.
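To make the objective concrete: information-theoretic co-clustering simultaneously partitions the rows and columns of a co-occurrence table so that the clustered table retains as much of the original mutual information as possible. The sketch below is not the authors' implementation; it is a minimal greedy hill-climbing illustration of that objective, with assumed function names (`mutual_information`, `co_cluster`) and a toy word-by-context count matrix.

```python
from math import log2
import random

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as a 2-D list summing to 1."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * log2(p / (px[i] * py[j]))
    return mi

def cluster_joint(P, rows, cols, k, l):
    """Aggregate a joint distribution into (row-cluster, column-cluster) cells."""
    Q = [[0.0] * l for _ in range(k)]
    for i, row in enumerate(P):
        for j, p in enumerate(row):
            Q[rows[i]][cols[j]] += p
    return Q

def co_cluster(counts, k, l, iters=10, seed=0):
    """Greedy co-clustering: repeatedly reassign each row and column label to
    the cluster that maximizes the mutual information the clustering retains."""
    rng = random.Random(seed)
    total = sum(sum(row) for row in counts)
    P = [[c / total for c in row] for row in counts]
    rows = [rng.randrange(k) for _ in P]
    cols = [rng.randrange(l) for _ in P[0]]

    def retained():
        return mutual_information(cluster_joint(P, rows, cols, k, l))

    for _ in range(iters):
        for labels, n, size in ((rows, len(P), k), (cols, len(P[0]), l)):
            for idx in range(n):
                best, best_mi = labels[idx], -1.0
                for lab in range(size):
                    labels[idx] = lab  # trial assignment
                    mi = retained()
                    if mi > best_mi:
                        best, best_mi = lab, mi
                labels[idx] = best
    return rows, cols
```

By the data-processing inequality, the clustered table can never carry more mutual information than the original, so the greedy search scores each candidate lexicon variant by how little information the grouping discards; this is one way to operationalize the paper's quantified notion of "meaningfulness."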