Denormalization and cross referencing in theoretical lexicography

  • Authors:
  • Joseph E. Grimes

  • Affiliations:
  • Cornell University, Ithaca, NY

  • Venue:
  • ACL '84: Proceedings of the 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics
  • Year:
  • 1984

Abstract

A computational vehicle for lexicography was designed to keep to the constraints of meaning-text theory: sets of lexical correlates, limits on the form of definitions, and argument relations similar to lexical-functional grammar.

Relational data bases look like a natural framework for this. But linguists operate with a non-normalized view. Mappings between semantic actants and grammatical relations do not fit actant fields uniquely. Lexical correlates and examples are polyvalent, hence denormalized.

Cross referencing routines help the lexicographer work toward a closure state in which every term of a definition traces back to zero level terms defined extralinguistically or circularly. Dummy entries produced from defining terms ensure no trace is overlooked. Values of lexical correlates lead to other word senses. Cross references for glosses produce an indexed unilingual dictionary, the start of a fully bilingual one.

To assist field work a small structured editor for a systematically denormalized data base was implemented in PTP under RT-11; Mumps would now be easier to implement on small machines. It allowed fields to be repeated and nonatomic strings included, and produced cross reference entries. It served for a monograph on a language of Mexico and for student projects from Africa and Asia.
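
The denormalization the abstract describes, with repeated fields and nonatomic example strings in a single record, can be pictured with a small sketch. The following is a hypothetical modern rendering in Python, not the paper's PTP implementation; the field names and the lexical-function labels (Syn, S0) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Sense:
    """One word sense kept as a single denormalized record."""
    headword: str
    pos: str
    definition: str                                       # restricted defining vocabulary
    correlates: dict[str, list[str]] = field(default_factory=dict)  # repeated field
    examples: list[str] = field(default_factory=list)     # nonatomic strings
    glosses: list[str] = field(default_factory=list)      # feeds the bilingual index

# Repeated correlate values and whole-sentence examples are exactly what a
# fully normalized relational schema would forbid inside one row.
cut = Sense(
    headword="cut",
    pos="v",
    definition="separate a surface by moving an edge against it",
    correlates={"Syn": ["slice", "sever"], "S0": ["cutting"]},  # illustrative correlates
    examples=["She cut the rope with a knife."],
    glosses=["cortar"],
)
```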
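The cross-referencing pass toward closure can likewise be sketched: collect the defining terms of each definition and create a dummy entry for any term that has no entry of its own, so no trace to the zero level is overlooked. The tokenizer, the zero-level word list, and the data shapes here are assumptions for illustration, not the paper's routines.

```python
import re

ZERO_LEVEL = {"a", "an", "by", "it", "the", "to"}   # hypothetical zero-level terms

def defining_terms(definition: str) -> set[str]:
    """Content words of a definition that must themselves be defined."""
    return {w for w in re.findall(r"[a-z]+", definition.lower())
            if w not in ZERO_LEVEL}

def cross_reference(lexicon: dict[str, str]) -> dict[str, str]:
    """Return the lexicon plus a dummy (empty) entry for every defining
    term that lacks an entry of its own, so no trace is overlooked."""
    expanded = dict(lexicon)
    for definition in lexicon.values():
        for term in defining_terms(definition):
            expanded.setdefault(term, "")    # dummy entry awaiting a definition
    return expanded

lexicon = {"cut": "separate a surface by moving an edge against it"}
expanded = cross_reference(lexicon)
print(sorted(term for term, d in expanded.items() if not d))
# -> ['against', 'edge', 'moving', 'separate', 'surface']
```

Repeating the pass as the lexicographer fills in the dummy definitions drives the lexicon toward the closure state the abstract describes, in which every remaining term is either zero level or defined circularly.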