Domain-specific semantic relatedness from Wikipedia: can a course be transferred?
NAACL HLT '12 Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Berners-Lee's compelling vision of a Semantic Web is hindered by a chicken-and-egg problem, which is best solved via machine reading: automatically extracting information from natural-language text to make it accessible to software agents. We argue that bootstrapping is the best way to build such a system. We choose Wikipedia as the initial data source because it is comprehensive, high-quality, and contains enough collaboratively created structure to launch a self-supervised bootstrapping process. We have developed three systems that realize this vision:

• KYLIN, which applies the heuristic of matching Wikipedia sentences with infobox attribute values to create training examples for learning relation-specific extractors.
• KOG, which automatically generates a Wikipedia infobox ontology by integrating evidence from heterogeneous resources via joint inference with Markov Logic Networks.
• WOE, which uses the same Wikipedia matching heuristic as KYLIN, but abstracts the resulting examples into relation-independent training data to learn an unlexicalized open extractor.

Our experiments show that these automatically learned systems can render much of Wikipedia into high-quality semantic data, providing a solid base from which to bootstrap toward the general Web.
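The sentence-infobox matching heuristic can be illustrated with a minimal sketch: a sentence that mentions an infobox attribute's value is treated as a positive training example for that relation. The function name, data structures, and example article below are illustrative assumptions, not the paper's actual implementation.

```python
def match_sentences_to_infobox(sentences, infobox):
    """Pair each infobox (attribute, value) with sentences mentioning the value.

    A hypothetical sketch of the distant-supervision heuristic: string
    containment stands in for the paper's actual matching procedure.
    """
    examples = []
    for attribute, value in infobox.items():
        for sentence in sentences:
            # A sentence containing the attribute value becomes a
            # positive example for that relation's extractor.
            if value.lower() in sentence.lower():
                examples.append((attribute, value, sentence))
    return examples


# Illustrative article text and infobox (invented for this sketch).
article_sentences = [
    "Jimi Hendrix was born in Seattle, Washington.",
    "He is widely regarded as one of the greatest guitarists.",
]
infobox = {"birth_place": "Seattle, Washington"}

training = match_sentences_to_infobox(article_sentences, infobox)
# Only the first sentence mentions the value, so one example is produced.
```

Each resulting (attribute, value, sentence) tuple can then serve as labeled data for a relation-specific extractor, or, as in WOE, be abstracted away from the specific relation to train an open extractor.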