The traditional task of a focused web crawler is to harvest a collection of web documents concentrated on a topical subspace. The central difficulty for focused crawlers is identifying the next most important and relevant link to follow. Most focused crawlers rely on probabilistic models to predict document relevance. Web documents are well characterized by their hypertext, and that hypertext can be used to determine a document's relevance to the search domain: the semantics of a link characterize the semantics of the document it refers to. This article proposes a novel focused crawler named LSCrawler. LSCrawler retrieves documents by estimating the relevance of each document from the keywords in the link and the text surrounding the link. Document relevance is computed by measuring the semantic similarity between the keywords in the link and the taxonomy hierarchy of the specific domain. Because it exploits the semantics of the keywords in the link, the system exhibits better recall.
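The scoring idea described above — rating an unvisited link by the semantic similarity between its anchor-text keywords and a domain taxonomy, then fetching the highest-scoring link first — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the toy taxonomy, the Wu–Palmer-style similarity measure, and all names (`TAXONOMY`, `link_relevance`, the example URLs) are assumptions made for the example.

```python
import heapq

# Toy domain taxonomy (assumed for illustration): child -> parent, root -> None.
TAXONOMY = {
    "sports": None,
    "ball_games": "sports",
    "football": "ball_games",
    "basketball": "ball_games",
    "athletics": "sports",
    "sprinting": "athletics",
}

def path_to_root(concept):
    """Ancestors of a concept, from the concept itself up to the taxonomy root."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = TAXONOMY[concept]
    return path

def similarity(a, b):
    """Wu-Palmer-style similarity: 2*depth(lcs) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    if not common:
        return 0.0
    depth = lambda c: len(path_to_root(c))       # root has depth 1
    lcs = max(common, key=depth)                 # deepest common ancestor
    return 2.0 * depth(lcs) / (depth(a) + depth(b))

def link_relevance(anchor_keywords, topic="football"):
    """Score a link by its best-matching anchor keyword against the topic."""
    scores = [similarity(k, topic) for k in anchor_keywords if k in TAXONOMY]
    return max(scores, default=0.0)

# Crawl frontier as a max-priority queue: (negated score, url).
frontier = []
for url, anchor in [
    ("http://example.org/a", ["basketball", "scores"]),
    ("http://example.org/b", ["sprinting"]),
    ("http://example.org/c", ["football", "fixtures"]),
]:
    heapq.heappush(frontier, (-link_relevance(anchor), url))

best = heapq.heappop(frontier)   # most relevant link is fetched first
```

With this toy taxonomy, the link whose anchor text contains "football" scores 1.0 against the topic "football" and is popped first, while "basketball" (a sibling concept) and "sprinting" (a more distant relative) receive progressively lower scores — the kind of ordering a semantic focused crawler uses to steer its frontier.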