ParamSpider v1.0 released: Mining URLs from the dark corners of Web Archives for bug hunting, fuzzing, and further probing

ParamSpider: Parameter miner for humans

ParamSpider allows you to fetch URLs related to any domain or a list of domains from Wayback archives. It filters out “boring” URLs, allowing you to focus on the ones that matter the most.

Mining URLs

Key Features:

  • Finds parameters in the web archives of the entered domain.
  • Supports excluding URLs with specific extensions.
  • Saves the results in a clean, readable format.
  • Mines parameters from web archives without interacting with the target host.
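The last point is the tool's defining trait: the URLs come from the Internet Archive, so no request ever reaches the target itself. A minimal sketch of that approach (illustrative, not ParamSpider's actual source; it uses the public Wayback CDX API endpoint and query parameters):

```python
import urllib.parse

# Public Wayback Machine CDX API endpoint; one GET here returns archived
# URLs for a domain, one per line, without touching the target host.
CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query_url(domain: str) -> str:
    """Build a CDX query listing archived URLs for a domain (and subdomains)."""
    params = {
        "url": f"*.{domain}/*",   # wildcard to include subdomains
        "output": "text",
        "fl": "original",         # only the original-URL column
        "collapse": "urlkey",     # de-duplicate near-identical captures
    }
    return CDX_ENDPOINT + "?" + urllib.parse.urlencode(params)

def has_params(url: str) -> bool:
    """Keep only URLs that actually carry query-string parameters."""
    query = urllib.parse.urlparse(url).query
    return bool(urllib.parse.parse_qs(query))

# Filtering a few sample archived URLs down to the "interesting" ones:
archived = [
    "https://example.com/index.php?id=2",
    "https://example.com/static/logo.png",
    "https://example.com/search?q=test&page=1",
]
interesting = [u for u in archived if has_params(u)]
print(interesting)
# → ['https://example.com/index.php?id=2', 'https://example.com/search?q=test&page=1']
```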


Installation :

$ git clone https://github.com/devanshbatham/ParamSpider
$ cd ParamSpider
$ pip install -r requirements.txt


1 – For a simple scan [without the --exclude parameter]

$ python3 paramspider.py --domain hackerone.com

2 – For excluding URLs with specific extensions

$ python3 paramspider.py --domain hackerone.com --exclude php,jpg,svg
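A sketch of what this extension-based exclusion might look like internally (an illustrative assumption, not ParamSpider's actual code): drop any URL whose path ends in a banned extension, ignoring the query string when comparing.

```python
import urllib.parse

def exclude_extensions(urls, extensions):
    """Drop URLs whose *path* ends with one of the excluded extensions.

    The query string is deliberately left out of the comparison, so
    'menu.svg?v=3' is still recognized as an .svg asset.
    """
    banned = tuple("." + ext.lower().lstrip(".") for ext in extensions)
    kept = []
    for url in urls:
        path = urllib.parse.urlparse(url).path.lower()
        if not path.endswith(banned):
            kept.append(url)
    return kept

urls = [
    "https://example.com/profile.php?id=7",
    "https://example.com/img/banner.jpg",
    "https://example.com/icons/menu.svg?v=3",
    "https://example.com/api/items?sort=asc",
]
print(exclude_extensions(urls, ["php", "jpg", "svg"]))
# → ['https://example.com/api/items?sort=asc']
```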

3 – For finding nested parameters

$ python3 paramspider.py --domain hackerone.com --level high
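One plausible reading of "nested parameters" (my assumption, not confirmed from the source): a parameter value can itself be a URL with its own query string, and those inner parameter names are worth mining too. A recursive sketch:

```python
import urllib.parse

def param_names(url: str, depth: int = 2) -> set:
    """Collect parameter names, recursing into values that are URLs.

    Hypothetical illustration of nested-parameter mining: if a value
    (after URL-decoding) looks like it carries its own query string,
    descend into it up to `depth` levels.
    """
    names = set()
    parsed = urllib.parse.urlparse(url)
    for key, values in urllib.parse.parse_qs(parsed.query).items():
        names.add(key)
        if depth > 0:
            for value in values:
                inner = urllib.parse.unquote(value)
                if "?" in inner and "=" in inner:
                    names |= param_names(inner, depth - 1)
    return names

# 'next' holds an encoded URL that itself has ?ref=mail&utm=1
url = "https://example.com/redirect?next=https%3A%2F%2Fexample.com%2Fpage%3Fref%3Dmail%26utm%3D1"
print(sorted(param_names(url)))
# → ['next', 'ref', 'utm']
```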

4 – Saving the results

$ python3 paramspider.py --domain hackerone.com --exclude php,jpg --output hackerone.txt


$ python3 paramspider.py --domain bugcrowd.com --exclude woff,css,js,png,svg,php,jpg --output bugcrowd.txt
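The saved list is most useful once each parameter value is swapped for a placeholder, so every line becomes a ready-made fuzzing template. A minimal sketch of that normalization step (illustrative; the FUZZ placeholder convention matches common parameter-mining output, but this is not ParamSpider's actual code):

```python
import urllib.parse

def fuzzify(urls, placeholder="FUZZ"):
    """Replace every parameter value with `placeholder` and de-duplicate.

    Two URLs that differ only in parameter values collapse into one
    fuzzing template.
    """
    seen, out = set(), []
    for url in urls:
        parts = urllib.parse.urlsplit(url)
        keys = urllib.parse.parse_qs(parts.query).keys()
        query = "&".join(f"{k}={placeholder}" for k in keys)
        cleaned = urllib.parse.urlunsplit(parts._replace(query=query))
        if cleaned not in seen:
            seen.add(cleaned)
            out.append(cleaned)
    return out

urls = [
    "https://example.com/item?id=1&sort=asc",
    "https://example.com/item?id=99&sort=desc",  # same template as above
]
print(fuzzify(urls))
# → ['https://example.com/item?id=FUZZ&sort=FUZZ']
```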


Copyright (C) 2020 0xAsm0d3us