Machine reading: from Wikipedia to the Web

  • Authors:
  • Daniel S. Weld; Fei Wu

  • Affiliations:
  • University of Washington; University of Washington

  • Venue:
  • Machine reading: from Wikipedia to the Web
  • Year:
  • 2010


Abstract

Berners-Lee's compelling vision of a Semantic Web is hindered by a chicken-and-egg problem, which can best be solved via machine reading: automatically extracting information from natural-language text to make it accessible to software agents. We argue that bootstrapping is the best way to build such a system. We choose Wikipedia as the initial data source because it is comprehensive, high-quality, and contains enough collaboratively created structure to launch a self-supervised bootstrapping process. We have developed three systems that realize this vision:

  • KYLIN, which applies the Wikipedia heuristic of matching sentences with infobox attributes to create training examples for learning relation-specific extractors.
  • KOG, which automatically generates the Wikipedia Infobox Ontology by integrating evidence from heterogeneous resources via joint inference with Markov Logic Networks.
  • WOE, which uses the same sentence-matching heuristic as KYLIN, but abstracts these examples into relation-independent training data to learn an unlexicalized open extractor.

Our experiments show that these automatically learned systems can render much of Wikipedia into high-quality semantic data, providing a solid base from which to bootstrap toward the general Web.
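The self-supervision heuristic at the heart of KYLIN can be illustrated with a minimal sketch. This is not the authors' code: the function name, the infobox, and the sentences below are all hypothetical, and real Wikipedia matching would need fuzzier comparison than the exact substring test used here. The idea is simply that each infobox (attribute, value) pair whose value appears in an article sentence yields a labeled training example for a relation-specific extractor.

```python
# Hedged sketch of KYLIN-style self-supervision: harvest labeled training
# sentences by matching infobox attribute values against article text.
# All names and data here are illustrative, not from the original systems.

def harvest_training_examples(infobox, sentences):
    """Pair each infobox (attribute, value) with sentences mentioning the value."""
    examples = []
    for attribute, value in infobox.items():
        for sentence in sentences:
            # Naive case-insensitive substring match; real systems would
            # need normalization, tokenization, and fuzzier matching.
            if value.lower() in sentence.lower():
                examples.append((attribute, value, sentence))
    return examples

# Hypothetical article data for illustration.
infobox = {"birth_place": "Seattle", "occupation": "computer scientist"}
sentences = [
    "She was born in Seattle in 1975.",
    "She later moved to Boston.",
    "As a computer scientist, she studies information extraction.",
]

for attribute, value, sentence in harvest_training_examples(infobox, sentences):
    print(attribute, "->", sentence)
```

Each harvested triple becomes a positive example for one relation (e.g. `birth_place`), which is what makes the learned extractors relation-specific; WOE's departure, per the abstract, is to abstract such examples into relation-independent training data.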