Discovering latent topical structure by second-order similarity analysis

  • Authors:
  • Timothy Cribbin

  • Affiliations:
  • Department of Information Systems and Computing, Brunel University, Uxbridge UB8 3PH, UK

  • Venue:
  • Journal of the American Society for Information Science and Technology
  • Year:
  • 2011

Abstract

Computing document similarity directly from a “bag of words” vector space model can be problematic because the assumption of term independence ignores both the relationships between synonymous terms and the contextual influences that determine the sense of polysemous terms. This study compares two methods that potentially address these problems by deriving the higher-order relationships that lie latent within the original first-order space. The first is latent semantic analysis (LSA), a dimension-reduction method that is a well-known means of addressing the vocabulary-mismatch problem in information retrieval systems. The second is the lesser-known yet conceptually simple approach of second-order similarity (SOS) analysis, whereby latent similarity is measured in terms of mutual first-order similarity. Nearest-neighbour tests show that SOS analysis derives similarity models that are superior to both first-order and LSA-derived models at both coarse and fine levels of semantic granularity. SOS analysis has been criticized for its computational complexity. A second contribution is therefore the novel application of vector truncation to reduce run-time by a constant factor. Speed-ups of 4 to 10 times are achievable without compromising the structural gains achieved by full-vector SOS analysis. © 2011 Wiley Periodicals, Inc.
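
For readers unfamiliar with the technique, the sketch below illustrates the general idea described in the abstract: first-order similarity is computed directly between bag-of-words vectors, and second-order similarity is then computed between the documents' first-order similarity profiles, optionally after truncating each profile to its strongest entries. The function name second_order_similarity, the parameter k, and the use of cosine as the similarity measure are illustrative assumptions rather than the paper's exact formulation, and this dense NumPy version is for exposition only (the paper's run-time gains come from truncation shrinking the work of the second pass, which a dense implementation does not exploit).

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def second_order_similarity(doc_term, k=None):
    """Illustrative sketch of second-order similarity (SOS) analysis.

    doc_term : (n_docs, n_terms) bag-of-words matrix
    k        : if given, truncate each first-order similarity vector to its
               k largest entries before the second pass (vector truncation)
    """
    # First-order similarity: cosine similarity between document vectors.
    first_order = cosine_similarity(doc_term)

    if k is not None:
        # Vector truncation: keep only the k strongest first-order
        # neighbours of each document and zero the rest.
        truncated = np.zeros_like(first_order)
        top_k = np.argsort(first_order, axis=1)[:, -k:]
        rows = np.arange(first_order.shape[0])[:, None]
        truncated[rows, top_k] = first_order[rows, top_k]
        first_order = truncated

    # Second-order similarity: two documents are similar to the extent that
    # they are similar to the same other documents, i.e. the cosine
    # similarity of their first-order similarity profiles.
    return cosine_similarity(first_order)

# Toy usage: three documents over a four-term vocabulary.
docs = np.array([[2, 1, 0, 0],
                 [1, 2, 1, 0],
                 [0, 0, 1, 2]], dtype=float)
print(second_order_similarity(docs, k=2))
```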