Term dependence: a basis for Luhn and Zipf models

  • Authors: Robert M. Losee
  • Affiliations: Univ. of North Carolina, Chapel Hill
  • Venue: Journal of the American Society for Information Science and Technology
  • Year: 2001

Abstract

There are regularities in the statistical information that natural language terms provide about neighboring terms. We find that as phrase rank increases, moving from common to less common phrases, the expected mutual information measure (EMIM) between the terms regularly decreases. Luhn's model suggests that midrange terms are the best index terms and relevance discriminators. We suggest reasons for this principle based on the empirical relationship shown here between the rank of terms within phrases and the average mutual information between terms, which we refer to as the Inverse Representation – EMIM principle. We also suggest an Inverse EMIM term weight, consistent with Luhn's distribution, for indexing and retrieval applications. An information-theoretic interpretation of Zipf's Law is provided. Using the regularity noted here, we suggest that Zipf's Law is a consequence of the statistical dependencies between terms, described here using information-theoretic concepts.
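The abstract does not spell out the estimator or the weighting formula, but the following Python sketch illustrates the standard EMIM computation over binary term occurrences, together with one plausible reading of an "Inverse EMIM" weight as 1/EMIM. The function names, the set-of-terms document representation, and the 1/EMIM form are illustrative assumptions, not Losee's published definitions.

```python
import math
from collections import Counter

def emim(docs, x, y):
    """Estimate the expected mutual information measure (EMIM) between
    terms x and y from their binary occurrence pattern across documents:
    the sum over the four occur/not-occur cells of
    P(a, b) * log2(P(a, b) / (P(a) * P(b)))."""
    n = len(docs)
    joint = Counter()  # counts of (x present, y present) cells
    for doc in docs:
        joint[(x in doc, y in doc)] += 1
    px = (joint[(True, True)] + joint[(True, False)]) / n  # P(x present)
    py = (joint[(True, True)] + joint[(False, True)]) / n  # P(y present)
    total = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        p_a = px if a else 1.0 - px
        p_b = py if b else 1.0 - py
        if p_ab > 0 and p_a > 0 and p_b > 0:
            total += p_ab * math.log2(p_ab / (p_a * p_b))
    return total

def inverse_emim_weight(docs, x, y, eps=1e-9):
    """Hypothetical 'Inverse EMIM' term weight: grows as the mutual
    information between the terms shrinks. The 1/EMIM form is an
    assumption; the abstract does not give the exact formula."""
    return 1.0 / (emim(docs, x, y) + eps)

# Toy demonstration on a four-document collection (each doc a set of terms).
docs = [
    {"information", "retrieval", "model"},
    {"information", "theory"},
    {"retrieval", "system"},
    {"information", "retrieval"},
]
print(emim(docs, "information", "retrieval"))                 # EMIM for one term pair
print(inverse_emim_weight(docs, "information", "retrieval"))  # corresponding inverse weight
```

On a real collection, phrase-rank data would replace this toy example; the paper's reported regularity is that EMIM falls off steadily as phrase rank increases from common to rare phrases.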