dcrawl: multi-threaded web crawler
dcrawl is a simple, but smart, multi-threaded web crawler for randomly gathering huge lists of unique domain names.
How does it work?
dcrawl takes one site URL as input and detects all <a href=…> links in the site's body. Every link found is put into a queue. Each queued link is then crawled in the same way, branching out to further URLs found on each crawled page.
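A minimal sketch of that crawl loop in Go (the language dcrawl is written in), using the golang.org/x/net/html tokenizer; the helper name, the channel-based queue, and the single-threaded loop are illustrative assumptions, not dcrawl's actual code:

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/html"
)

// extractLinks scans an HTML document for <a href=...> tags and returns
// the href values. Hypothetical helper for illustration only.
func extractLinks(resp *http.Response) []string {
	var links []string
	z := html.NewTokenizer(resp.Body)
	for {
		switch z.Next() {
		case html.ErrorToken:
			return links // io.EOF or a parse error ends the scan
		case html.StartTagToken:
			tok := z.Token()
			if tok.Data != "a" {
				continue
			}
			for _, attr := range tok.Attr {
				if attr.Key == "href" {
					links = append(links, attr.Val)
				}
			}
		}
	}
}

func main() {
	queue := make(chan string, 1024) // URL queue shared by workers
	queue <- "http://example.com"    // the single starting URL

	// Toy single-consumer loop; dcrawl runs many such workers concurrently.
	for url := range queue {
		resp, err := http.Get(url)
		if err != nil {
			continue // inaccessible sites are skipped
		}
		for _, link := range extractLinks(resp) {
			select {
			case queue <- link: // branch out to newly found links
			default: // queue full: drop the link in this sketch
			}
		}
		resp.Body.Close()
		fmt.Println("crawled:", url)
	}
}
```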
How smart crawling works:
- Branches out only to a predefined number of links found per hostname.
- Limits the number of different hostnames allowed per domain (avoids subdomain crawling hell, e.g. blogspot.com).
- Can be restarted with the same list of domains – last saved domains are added to the URL queue.
- Crawls only sites that return a text/html Content-Type in the HEAD response (see the sketch after this list).
- Retrieves at most 1 MB of each site's body.
- Does not save inaccessible domains.
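Two of those filters, the HEAD Content-Type check and the 1 MB body cap, could look like this in Go; shouldCrawl and fetchBody are assumed names for illustration, not dcrawl's code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// shouldCrawl issues a HEAD request and accepts a site only when the
// server reports a text/html Content-Type.
func shouldCrawl(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false // inaccessible domains are skipped, not saved
	}
	resp.Body.Close()
	return strings.HasPrefix(resp.Header.Get("Content-Type"), "text/html")
}

// fetchBody downloads at most 1 MB of the page body, mirroring the
// size cap described above.
func fetchBody(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(io.LimitReader(resp.Body, 1<<20)) // 1 MB cap
}

func main() {
	if shouldCrawl("http://example.com") {
		body, err := fetchBody("http://example.com")
		if err == nil {
			fmt.Printf("fetched %d bytes\n", len(body))
		}
	}
}
```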
How to run?
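dcrawl is a Go program, so a standard Go toolchain build should suffice (assuming a checked-out copy of the repository):

```
go build
```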
Usage
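An invocation along these lines starts a crawl; the flag names (-url for the start URL, -out for the output file of gathered hostnames, -t for the number of concurrent threads) are recalled from the project's usage text and worth verifying against dcrawl's own help output:

```
./dcrawl -url http://example.com -out domains.txt -t 8
```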
Copyright (c) 2017 Kuba Gretzky
Source: https://github.com/kgretzky/