Abstract:
A web crawler is a relatively simple automated program or script that methodically scans or “crawls” through Internet pages to retrieve information. Alternative names for a web crawler include web spider, web robot, bot, crawler, and automatic indexer. Web crawlers have many uses; their primary purpose is to collect data so that when Internet users enter a search term, the search engine can quickly return relevant web sites. In this work we propose a model of a low-cost web crawler for distributed environments based on an efficient URL assignment algorithm. The function of every module of the crawler is analyzed, and the main rules that crawlers must follow to maintain load balancing and system robustness when searching the web simultaneously are discussed. The proposed dynamic URL assignment method, based on grid computing technology and dynamic clustering, efficiently increases web crawler performance.
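The abstract does not detail the paper's actual assignment algorithm, so the following is only a minimal illustrative sketch of the general idea behind URL assignment in a distributed crawler: partitioning URLs across crawler nodes by hashing the hostname, so each site is handled by exactly one node (which simplifies per-host politeness and helps balance load). All names and parameters here are hypothetical, not taken from the paper.

```python
# A minimal sketch of host-based URL assignment for a distributed crawler.
# This is NOT the paper's dynamic/grid-based method; it only illustrates
# the basic partitioning idea that such methods refine.

import hashlib
from urllib.parse import urlparse


def assign_url(url: str, num_crawlers: int) -> int:
    """Map a URL to a crawler index by hashing its hostname.

    Hashing the hostname (rather than the full URL) keeps all pages of a
    site on one crawler node.
    """
    host = urlparse(url).netloc.lower()
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_crawlers


if __name__ == "__main__":
    urls = [
        "http://example.com/a",
        "http://example.com/b",
        "http://example.org/index.html",
    ]
    for u in urls:
        print(u, "-> crawler", assign_url(u, num_crawlers=4))
```

A static hash like this cannot rebalance when some hosts dominate the workload; the dynamic clustering the abstract mentions is presumably aimed at exactly that limitation.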
Published in: 2010 IEEE International Conference on Computational Intelligence for Measurement Systems and Applications
Date of Conference: 06-08 September 2010
Date Added to IEEE Xplore: 28 October 2010