TorBoT v3.1.2 releases: Deep and Dark Web OSINT Tool
TorBoT is an OSINT tool for the Deep and Dark Web.
The basic procedure executed by the web crawling algorithm takes a list of seed URLs as its input and repeatedly executes the following steps:
- Remove a URL from the URL list.
- Check whether the page exists.
- Download the corresponding page.
- Check the relevance of the page.
- Extract any links contained in it.
- Check the cache to see whether each link has already been seen.
- Add the unique links back to the URL list.
- After all URLs are processed, return the most relevant page.
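The loop above can be sketched as a breadth-first crawl. This is a minimal illustration, not TorBot's actual implementation: `fetch_page`, `extract_links`, and `relevance` are hypothetical callables standing in for the tool's own networking and scoring code.

```python
from collections import deque

def crawl(seed_urls, fetch_page, extract_links, relevance):
    """BFS crawl sketch following the steps above.

    Hypothetical callables (not TorBot's real API):
      fetch_page(url)     -> page body, or None if the page doesn't exist
      extract_links(page) -> iterable of URLs found in the page
      relevance(page)     -> numeric relevance score
    """
    queue = deque(seed_urls)
    seen = set(seed_urls)               # the "cache" of already-known links
    best_page, best_score = None, float("-inf")

    while queue:
        url = queue.popleft()           # remove a URL from the URL list
        page = fetch_page(url)          # check existence / download the page
        if page is None:
            continue                    # page does not exist; skip it
        score = relevance(page)         # check the relevance of the page
        if score > best_score:
            best_page, best_score = page, score
        for link in extract_links(page):  # extract any links contained in it
            if link not in seen:          # cache check for duplicates
                seen.add(link)
                queue.append(link)        # add unique links back to the list
    return best_page                    # the most relevant page found
```

Using a queue gives the breadth-first ordering mentioned later in the feature list; the `seen` set is what prevents the crawler from revisiting pages on cyclic onion sites.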
Features
- Onion Crawler (.onion)
- Returns page title and address, with a short description of the site
- Save links to database
- Get data from site
- Save crawl info to JSON file
- Crawl custom domains
- Check if the link is live
- Built-in Updater
- Builds a visual tree of link relationships that can be quickly viewed or saved to an image file
- Visualization Module Revamp
- Implement BFS search for the web crawler
- Use a Golang service for concurrent web crawling
- Improve stability (handle errors gracefully, expand test coverage, etc.)
- Improve performance (done with gotor)
Changelog v3.1.1
- Update release by @KingAkeem in #288
- Update README.md by @KingAkeem in #289
- Improving release process by @KingAkeem in #290
Use
Before you run TorBot, make sure the following things are done properly:
- Run the tor service: `sudo service tor start`
- Make sure that your torrc is configured to `SOCKS_PORT localhost:9050`
On Linux platforms, you can build an executable for TorBot using the `install.sh` script. First give the script the correct permissions with `chmod +x install.sh`, then run it. You should now have an executable file named `torBot`; run `./torBot` to start the program.
- NOTE: All flags listed under `-u URL, --url URL` must also be passed a `-u` flag.
Read more about torrc in the Tor Project's documentation.
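For reference, a minimal torrc line enabling the SOCKS listener that TorBot expects on port 9050 looks like this. This is a sketch of one directive only; most distribution packages of Tor enable this port by default, so check your existing torrc before editing.

```text
# torrc: listen for SOCKS connections on localhost port 9050 (Tor's default)
SocksPort 9050
```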
Install
Copyright (C) 2018 P5N4PPZ
Source: https://github.com/DedSecInside/