Web document text and images extraction using DOM analysis and natural language processing

  • Authors:
  • Parag Mulendra Joshi; Sam Liu

  • Affiliations:
  • Hewlett-Packard Laboratories, Palo Alto, CA, USA; Hewlett-Packard Laboratories, Palo Alto, CA, USA

  • Venue:
  • Proceedings of the 9th ACM Symposium on Document Engineering
  • Year:
  • 2009


Abstract

The Web has emerged as the most important source of information in the world. This has created a need for automated software components that analyze web pages and harvest useful information from them. However, in typical web pages the informative content is surrounded by a high degree of noise in the form of advertisements, navigation bars, links to other content, and so on. Often the noisy content is interspersed with the main content, leaving no clean boundaries between them. This noise makes the problem of information harvesting from web pages much harder. It is therefore essential to identify the main content of a web page and automatically isolate it from the noisy content before any further analysis. Most existing approaches rely on prior knowledge of website-specific templates and hand-crafted, site-specific rules to extract relevant content. We propose a generic approach that does not require prior knowledge of website templates. While HTML DOM analysis and visual layout analysis have sometimes been used, we believe that for higher accuracy in content extraction, the analyzing software needs to mimic a human user and understand content in natural language, much as humans intuitively do, in order to eliminate noisy content. In this paper, we describe a combination of HTML DOM analysis and Natural Language Processing (NLP) techniques for the automated extraction of the main article and its associated images from web pages.
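To illustrate the general idea, here is a minimal sketch of DOM-based main-content extraction. This is not the authors' implementation: it uses Python's standard-library `html.parser` to collect text per block-level element and then scores blocks with two assumed heuristics, link density (link-heavy blocks are likely navigation or ads) and a crude proxy for natural-language content (sentence punctuation). The paper's actual NLP component is more sophisticated than this proxy.

```python
from html.parser import HTMLParser

# Tags treated as candidate content blocks (an assumption for this sketch).
BLOCK_TAGS = {"div", "p", "td", "article", "section"}

class BlockCollector(HTMLParser):
    """Collects the text of each block-level element, tracking how much
    of that text came from inside <a> links."""
    def __init__(self):
        super().__init__()
        self.blocks = []   # finished blocks: (text, link_chars)
        self._stack = []   # open blocks: [text, link_chars]
        self._in_link = 0

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_TAGS:
            self._stack.append(["", 0])
        if tag == "a":
            self._in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self._in_link:
            self._in_link -= 1
        if tag in BLOCK_TAGS and self._stack:
            text, link_chars = self._stack.pop()
            self.blocks.append((text.strip(), link_chars))

    def handle_data(self, data):
        if self._stack:
            self._stack[-1][0] += data
            if self._in_link:
                self._stack[-1][1] += len(data)

def main_content(html):
    """Return the block whose text looks most like prose:
    long, low link density, sentence-like punctuation."""
    parser = BlockCollector()
    parser.feed(html)
    best, best_score = "", 0.0
    for text, link_chars in parser.blocks:
        if not text:
            continue
        link_density = link_chars / len(text)
        # Crude NLP proxy: count sentence-ending punctuation.
        sentences = text.count(".") + text.count("!") + text.count("?")
        score = len(text) * (1.0 - link_density) * (1 + sentences)
        if score > best_score:
            best, best_score = text, score
    return best
```

On a page where a navigation bar is pure link text and the article body is punctuated prose, the article block scores far higher and is returned; a real system would replace the punctuation count with genuine language analysis, as the paper proposes.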