Acquiring thesauri from wikis by exploiting domain models and lexical substitution

  • Authors:
  • Claudio Giuliano;Alfio Massimiliano Gliozzo;Aldo Gangemi;Kateryna Tymoshenko

  • Affiliations:
  • FBK, Trento, Italy; STLab-CNR, Rome (RM), Italy; STLab-CNR, Rome (RM), Italy; FBK, Trento, Italy

  • Venue:
  • ESWC'10: Proceedings of the 7th International Conference on The Semantic Web: Research and Applications - Volume Part II
  • Year:
  • 2010

Abstract

Acquiring structured data from wikis is a problem of increasing interest in knowledge engineering and the Semantic Web: collaboratively developed resources grow over time, are of high quality, and are constantly updated. Within this area, one task of particular interest is extracting thesauri from wikis. A thesaurus is a resource that lists words grouped together according to similarity of meaning, generally organized into sets of synonyms. Thesauri are useful for a wide variety of applications, including information retrieval and knowledge engineering. Most information in wikis is expressed by means of natural language text and internal links among Web pages, the so-called wikilinks. In this paper, we present an innovative method for inducing thesauri from Wikipedia. It leverages the structure of Wikipedia to extract concepts and the terms denoting them, yielding a thesaurus that can be profitably used in applications. When applied to re-rank a state-of-the-art baseline approach, the method substantially improves both precision and recall. Finally, we discuss how to represent the extracted results in RDF/OWL, following existing good practices.
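
The abstract only summarizes the approach, but the core intuition behind wikilink-based thesaurus induction can be illustrated concretely. The sketch below, a minimal illustration rather than the authors' actual pipeline (which also exploits domain models and lexical substitution to re-rank candidates), assumes hypothetical (anchor text, target page) pairs as they might be extracted from a Wikipedia dump: each target page is treated as a concept, and the anchor texts linking to it become candidate terms denoting that concept.

```python
from collections import Counter, defaultdict

# Hypothetical wikilink records: (anchor_text, target_page) pairs,
# e.g. as they might be extracted from a Wikipedia dump.
# The specific pairs are illustrative only, not from the paper.
wikilinks = [
    ("car", "Automobile"),
    ("automobile", "Automobile"),
    ("motor car", "Automobile"),
    ("car", "Automobile"),
    ("railway car", "Railroad_car"),
]


def build_candidate_thesaurus(links, min_count=1):
    """Group anchor texts by the Wikipedia page they point to.

    Each target page is treated as a concept; the normalized anchor
    texts that link to it form a candidate synonym set (synset) for
    that concept. A frequency threshold discards rare, noisy anchors.
    """
    counts = defaultdict(Counter)
    for anchor, target in links:
        counts[target][anchor.lower()] += 1
    return {
        concept: sorted(term for term, c in anchors.items() if c >= min_count)
        for concept, anchors in counts.items()
    }


if __name__ == "__main__":
    for concept, terms in build_candidate_thesaurus(wikilinks).items():
        print(f"{concept}: {terms}")
```

In a real setting, the resulting concept-to-terms map would be noisy, which is where a re-ranking step such as the one described in the paper becomes necessary; the grouped entries could then be serialized, for instance as SKOS concepts with preferred and alternative labels, to obtain the RDF/OWL representation discussed in the final part of the paper.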