Filtering artificial texts with statistical machine learning techniques

  • Authors:
  • Thomas Lavergne; Tanguy Urvoy; François Yvon

  • Affiliations:
  • Orange Labs, Lannion, France and Telecom ParisTech, Paris, France; Orange Labs, Lannion, France; Univ Paris Sud 11 & LIMSI/CNRS, Orsay cedex, France

  • Venue:
  • Language Resources and Evaluation
  • Year:
  • 2011

Abstract

Fake content is flourishing on the Internet, ranging from basic random word salads to web scraping. Most of this fake content is generated to feed fake web sites aimed at biasing search engine indexes: at the scale of a search engine, using automatically generated texts renders such sites harder to detect than using copies of existing pages. In this paper, we present three methods for distinguishing natural texts from artificially generated ones: the first uses basic lexicometric features, the second uses standard language models, and the third is based on a relative entropy measure that captures short-range dependencies between words. Our experiments show that lexicometric features and language models are effective at detecting most generated texts, but fail to detect texts generated with high-order Markov models. By comparison, our relative entropy scoring algorithm, especially when trained on a large corpus, allows us to detect these "hard" text generators with a high degree of accuracy.
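
To make the second method concrete, the sketch below shows one standard way to score a candidate text with an n-gram language model: train a bigram model on natural text, then compute the candidate's per-word cross-entropy, flagging texts whose score deviates sharply from that of natural text. This is a minimal illustration of the general technique, not the paper's implementation; the toy corpus, the add-one smoothing, and the idea of thresholding the score are all illustrative assumptions.

    # Minimal sketch of language-model scoring for generated-text
    # detection. Assumptions (not from the paper): a toy training
    # corpus, a bigram model, and add-one (Laplace) smoothing.

    from collections import Counter
    import math

    def train_bigram_lm(corpus_tokens):
        """Count unigrams and bigrams from a token list."""
        unigrams = Counter(corpus_tokens)
        bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
        return unigrams, bigrams

    def cross_entropy(text_tokens, unigrams, bigrams, vocab_size):
        """Per-word cross-entropy (bits) of the text under the bigram
        model, with add-one smoothing for unseen events."""
        total = 0.0
        for prev, word in zip(text_tokens, text_tokens[1:]):
            p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
            total += -math.log2(p)
        return total / max(len(text_tokens) - 1, 1)

    # Hypothetical usage: in practice the model would be trained on a
    # large natural-language corpus, not this toy sentence.
    natural_corpus = "the cat sat on the mat and the dog sat on the rug".split()
    uni, bi = train_bigram_lm(natural_corpus)

    candidate = "the mat sat on the dog".split()
    h = cross_entropy(candidate, uni, bi, vocab_size=len(uni))
    print(f"per-word cross-entropy: {h:.2f} bits")

    # A simple detector would compare h against thresholds calibrated
    # on held-out natural and generated texts.

As the abstract notes, such a score works well against crude generators but is blind to text sampled from a high-order Markov model trained on natural text, which by construction has natural-looking local statistics; this is the gap the paper's relative entropy scoring is designed to close.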