Extracting Knowledge from Wikipedia Articles through Distributed Semantic Analysis

  • Authors:
  • Nguyen Trung Hieu; Mario Di Francesco; Antti Ylä-Jääski

  • Affiliations:
  • Department of Computer Science and Engineering, Aalto University, Finland (all authors)

  • Venue:
  • Proceedings of the 13th International Conference on Knowledge Management and Knowledge Technologies
  • Year:
  • 2013

Abstract

Computing semantic word similarity and relatedness requires access to a vast semantic space for effective analysis. As a consequence, extracting useful information from a large amount of data on a single workstation is time-consuming. In this paper, we propose a system, called Distributed Semantic Analysis (DSA), that integrates a distributed approach with semantic analysis. DSA builds a list of concept vectors associated with each word by exploiting the knowledge provided by Wikipedia articles. Based on such lists, DSA calculates the degree of semantic relatedness between two words through the cosine measure. The proposed solution is built on top of the Hadoop MapReduce framework and the Mahout machine learning library. Experimental results show two major improvements over the state of the art, with particular reference to the Explicit Semantic Analysis method. First, our distributed approach significantly reduces the computation time to build the concept vectors, thus enabling the use of larger inputs, which is the basis for more accurate results. Second, DSA obtains a very high correlation of the computed relatedness with reference benchmarks derived from human judgements. Moreover, its accuracy is higher than that of solutions reported in the literature across multiple benchmarks.
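
To illustrate the relatedness computation described in the abstract, the following is a minimal sketch of the cosine measure applied to sparse concept vectors, where each word is mapped to weighted Wikipedia concepts (article id to TF-IDF weight). The class name, concept ids, and weights are hypothetical and only illustrative; the distributed vector construction on Hadoop MapReduce and Mahout is not shown here.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the cosine relatedness measure over sparse concept vectors,
 * as used in ESA-style methods: each word is represented by a vector of
 * Wikipedia concepts (concept id -> TF-IDF weight), and relatedness is
 * the cosine of the angle between the two vectors.
 */
public class CosineRelatedness {

    /** Cosine similarity between two sparse concept vectors. */
    static double cosine(Map<Integer, Double> a, Map<Integer, Double> b) {
        // Iterate over the smaller vector when computing the dot product.
        Map<Integer, Double> small = a.size() <= b.size() ? a : b;
        Map<Integer, Double> large = (small == a) ? b : a;

        double dot = 0.0;
        for (Map.Entry<Integer, Double> e : small.entrySet()) {
            Double w = large.get(e.getKey());
            if (w != null) {
                dot += e.getValue() * w;
            }
        }
        double normA = norm(a);
        double normB = norm(b);
        return (normA == 0.0 || normB == 0.0) ? 0.0 : dot / (normA * normB);
    }

    /** Euclidean norm of a sparse vector. */
    static double norm(Map<Integer, Double> v) {
        double sum = 0.0;
        for (double w : v.values()) {
            sum += w * w;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Hypothetical concept vectors for two words; the ids and weights
        // stand in for TF-IDF scores over Wikipedia articles.
        Map<Integer, Double> cat = new HashMap<>();
        cat.put(101, 0.83); // e.g. article "Cat"
        cat.put(205, 0.41); // e.g. article "Pet"
        cat.put(310, 0.12);

        Map<Integer, Double> dog = new HashMap<>();
        dog.put(102, 0.79); // e.g. article "Dog"
        dog.put(205, 0.52); // shared concept "Pet"
        dog.put(411, 0.09);

        System.out.printf("relatedness(cat, dog) = %.3f%n", cosine(cat, dog));
    }
}
```

In the paper's setting, the vectors would be produced by the distributed pipeline over the full Wikipedia corpus; the cosine step itself is the same regardless of how the vectors are built.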