A cost-effective method for detecting web site replicas on search engine databases

  • Authors:
  • André Luiz da Costa Carvalho; Edleno Silva de Moura; Altigran Soares da Silva; Klessius Berlt; Allan Bezerra

  • Affiliations:
  • Federal University of Amazonas, Computer Science Department, Av. Rodrigo Octávio Ramos, 3000, Manaus, Brazil (all authors)

  • Venue:
  • Data & Knowledge Engineering
  • Year:
  • 2007


Abstract

Identifying replicated web sites is an important task for search engines: it reduces data storage costs, speeds up query processing, and removes noise that might degrade the quality of the answers returned to users. This paper introduces a new approach for detecting web sites that are likely to be replicas in a search engine database. Our method combines the structure of web sites with the content of their pages to identify possible replicas. As we show through experiments, this combination improves precision and reduces the overall cost of the replica detection task. Our method achieves a quality improvement of 47.23% over previously proposed approaches.
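
The abstract does not spell out the algorithm, but the idea of combining site structure with page content can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the authors' actual method: URL paths stand in for site structure, SHA-1 hashes stand in for content fingerprints, Jaccard similarity measures overlap, and the 0.8 thresholds are arbitrary.

```python
import hashlib
from urllib.parse import urlparse

def path_structure(urls):
    """Set of URL paths, used here as a cheap proxy for site structure."""
    return {urlparse(u).path for u in urls}

def content_fingerprints(pages):
    """Set of content hashes, one per page body."""
    return {hashlib.sha1(body.encode("utf-8")).hexdigest() for body in pages}

def jaccard(a, b):
    """Jaccard similarity of two sets; 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def likely_replicas(site_a, site_b, struct_thresh=0.8, content_thresh=0.8):
    """Flag two sites (dicts with hypothetical 'urls' and 'pages' keys) as
    probable replicas when both structural and content overlap exceed the
    assumed thresholds."""
    s = jaccard(path_structure(site_a["urls"]),
                path_structure(site_b["urls"]))
    c = jaccard(content_fingerprints(site_a["pages"]),
                content_fingerprints(site_b["pages"]))
    return s >= struct_thresh and c >= content_thresh
```

Requiring agreement on both signals, rather than content alone, reflects the abstract's claim that the combination improves precision: two unrelated sites may coincidentally share boilerplate pages, but matching on both structure and content together is far less likely by chance.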