A research on a defending policy against the webcrawler's attack

  • Authors:
  • Wei Tong; Xiaoyao Xie

  • Affiliations:
  • School of Computer Science and Technology, Guizhou University, Guiyang, China; Key Laboratory of Information and Computing Science of Guizhou Province, Guizhou Normal University, Guiyang, China

  • Venue:
  • ASID '09: Proceedings of the 3rd International Conference on Anti-Counterfeiting, Security, and Identification in Communication
  • Year:
  • 2009

Abstract

With the growth of information on the Internet, many kinds of web crawlers fetch information from websites at any time and from anywhere. Some of them fetch information legitimately, while others attack websites at the application level and cause servers to break down. For a website, it is important to distinguish the different kinds of crawlers effectively, to accurately block malicious crawlers' attacks, and to reduce the load that crawlers place on the site during peak hours. Based on a deep analysis of the web crawler's fetching process, this paper summarizes the threats that web crawlers pose to website security and then proposes a trap policy and defense method based on robots.txt and a traffic (flux) monitor. Finally, experiments show that this strategy and method protect the website effectively.
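
The abstract does not give implementation details, so the following is only a minimal sketch of how a robots.txt-based trap policy and a traffic monitor could be combined, not the authors' actual system. The trap path /trap/, the window size, and the request-rate threshold below are hypothetical values chosen for illustration.

```python
# Sketch: a robots.txt honeypot trap plus a sliding-window traffic (flux) monitor.
# Any client that requests the disallowed trap path has ignored robots.txt and is
# treated as a bad crawler; any client exceeding the rate threshold is throttled.
import time
from collections import defaultdict, deque

TRAP_PATH = "/trap/"            # hypothetical path advertised as Disallow in robots.txt
WINDOW_SECONDS = 60             # sliding window for the traffic monitor
MAX_REQUESTS_PER_WINDOW = 120   # hypothetical per-client rate threshold

blocked_ips = set()
request_log = defaultdict(deque)  # client IP -> timestamps of recent requests


def robots_txt() -> str:
    """robots.txt that advertises the trap path; well-behaved crawlers avoid it."""
    return "User-agent: *\nDisallow: /trap/\n"


def handle_request(ip: str, path: str, now: float = None) -> bool:
    """Return True if the request should be served, False if the client is blocked."""
    now = time.time() if now is None else now
    if ip in blocked_ips:
        return False

    # Trap policy: requesting the disallowed path means robots.txt was ignored.
    if path.startswith(TRAP_PATH):
        blocked_ips.add(ip)
        return False

    # Flux monitor: keep only timestamps inside the window and block floods.
    log = request_log[ip]
    log.append(now)
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) > MAX_REQUESTS_PER_WINDOW:
        blocked_ips.add(ip)
        return False
    return True
```

In this sketch the two signals are independent: the trap catches crawlers that disobey robots.txt regardless of their request rate, while the rate check limits the load any single client can impose during peak hours.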