recursebuster v1.6.11 released: a rapid content discovery tool for recursively querying webservers
HEAD-Based Checks
For servers that support it, HEAD-based checks speed up content discovery considerably, since no body needs to be transferred. The default logic is to use a HEAD request to determine whether something exists; if it seems to exist, a GET is sent to retrieve and verify it. If there are sensitive pages that perform actions (i.e., ones that don’t really follow the HTTP verb spec), a file containing a list of exact URLs that should not be requested can be blacklisted with the
When a directory is identified, it gets added to the queue to be brute-forced. By default, one directory is brute-forced at a time; however, you can ‘cancel’ a directory interactively by hitting ‘ctrl+x’ in UI mode. If you’re not in UI mode (-noui), you need to have added the directory to the blacklist.
Since we are getting the page content anyway, why not use it to our advantage? Some basic checks are done to look for links within the HTML response. Any links found are added, and any directories identified are added too. By default, only the supplied host is whitelisted, so any links that go off-site (that is, to a different domain) are ignored. You can specify a file containing a list of whitelisted domains that you are OK with including in the spider with the
Gobuster is pretty fast when you smash -t 200, but who would do that? One of my goals for this was to keep performance on par with gobuster where possible. On most webservers, recursebuster seems to be faster, even though it sends both a HEAD and a GET request. This means you will hit WAF limits really quickly, which is why by default it’s
The ability to use a proxy is fairly useful in several situations. Not having to drop tools on a host in order to scan through it is always useful; recursebuster also works through Burp if you specify it as an HTTP proxy. When using recursebuster to supplement the Burp sitemap, use the -stitemap option to send only the ‘found’ or interesting responses to Burp; this should help avoid filling up your HTTP History with 404s.
Changelog v1.6.11
- fix content length breaking soft 404 detection
Install & Use
go get -u github.com/c-sto/recursebuster
recursebuster -u https://google.com -w wordlist.txt