A typical web search engine consists of three principal parts: a crawling engine, an indexing engine, and a searching engine. The present work aims to optimize the performance of the crawling engine, which finds new web pages and updates the web pages already stored in the database of the search engine. The crawling engine employs several robots that collect information from the Internet. We first calculate various performance measures of the system (e.g., the probability that an arbitrary page is lost due to buffer overflow, the probability of starvation of the system, and the average waiting time in the buffer). Intuitively, we would like to avoid starvation of the system and, at the same time, minimize the loss of information. We formulate the problem as a multi-criteria optimization problem, attributing a weight to each criterion, and solve it in the class of threshold policies. We consider a very general page arrival process, modeled by a Batch Marked Markov Arrival Process (BMMAP), and a very general service time, modeled by a phase-type (PH) distribution. The model has been applied to the performance evaluation and optimization of the crawler designed by the INRIA Maestro team in the framework of the RIAM INRIA-Canon research project.
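The paper derives these quantities analytically for the BMMAP/PH model. Purely as an illustration of the weighted multi-criteria formulation and of threshold policies, the sketch below replaces the general arrival and service processes with exponential ones (Poisson page arrivals, exponentially distributed download times) and grid-searches the weighted cost by simulation. All rates, weights, and names (simulate, cost, thresholds) are hypothetical and are not taken from the paper.

    import random

    def simulate(thresholds, lam=5.0, mu=3.0, buffer_size=50,
                 horizon=20_000.0, seed=1):
        """Simulate the crawler buffer under a multi-threshold policy.

        Robot i is active whenever the buffer holds at least thresholds[i-1]
        pages. Arrivals are Poisson(lam) and downloads exponential(mu) --
        a deliberate simplification of the paper's BMMAP/PH model.
        Returns (loss probability, starvation probability, mean sojourn time).
        """
        rng = random.Random(seed)
        t, q = 0.0, 0                      # clock and buffer occupancy
        arrivals = losses = 0
        area_q = starve_time = 0.0         # time integrals

        while t < horizon:
            active = sum(1 for th in thresholds if q >= th)
            rate = lam + active * mu       # total event rate (uniformization)
            dt = rng.expovariate(rate)
            area_q += q * dt
            if q == 0:
                starve_time += dt          # all robots idle: system starves
            t += dt
            if rng.random() < lam / rate:  # next event is a page arrival
                arrivals += 1
                if q < buffer_size:
                    q += 1
                else:
                    losses += 1            # buffer overflow: page is lost
            elif q > 0:
                q -= 1                     # a robot finishes downloading a page

        accepted = max(arrivals - losses, 1)
        return (losses / max(arrivals, 1),
                starve_time / t,
                area_q / accepted)         # Little's law: W = L / lambda_eff

    def cost(thresholds, weights=(100.0, 10.0, 1.0)):
        """Weighted sum of the three criteria (weights are arbitrary)."""
        loss, starve, wait = simulate(thresholds)
        return sum(w * x for w, x in zip(weights, (loss, starve, wait)))

    # Grid search over two-threshold policies (one threshold per robot).
    candidates = [(t1, t2) for t1 in range(1, 12) for t2 in range(t1, 30, 3)]
    best = min(candidates, key=cost)
    print("best thresholds:", best, "cost: %.3f" % cost(best))

In this toy model, increasing the weight on page loss pushes the optimum toward lower thresholds (robots activate earlier, so the buffer overflows less often), at the price of more starvation time, which mirrors the trade-off between starvation and information loss described above.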