S3Scanner v1.0 released: Scan for open S3 buckets and dump
A tool to find open S3 buckets and dump their contents
- Major: The `checkBucket()` function was changed to use boto to check for buckets instead of GET'ing the page out on the web. This is better for several reasons:
  - Buckets that are not open to the public but are listable, or are only open to authenticated users, are now properly found
  - Regions are no longer needed
- Due to the change to boto, AWS credentials are all but required. The tool will still run without them, but results will be incredibly inaccurate. Users will receive a warning if credentials are not found.
- The `buckets.txt` file now contains only bucket names instead of the previous `domain:region` format
- The screen output now says whether or not the bucket was found. The concept of ‘open’ vs ‘closed’ buckets no longer exists. This may change in the future.
- The newly added `checkBucketWithoutCreds` will now issue a maximum of 2 requests to check if a bucket exists. This helps ease the issue of 503s being returned intermittently.
- When dumping a bucket, the user can see that it's being dumped but doesn't get the ugly per-file output, and can still cancel with Ctrl+C. A dumping progress feature will most likely be implemented in the future.
- If a bucket doesn't allow listing of its contents, the size will be reported as "AccessDenied" or "AllAccessDisabled".
- Added: `getAcl()` to try to get the ACLs associated with found buckets. They're currently only output to the screen.
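The two-request cap described above can be sketched as a simple retry on 503. This is a hypothetical illustration, not the tool's actual code: the `check_bucket_without_creds` name is made up, and the `head` callable is injected here only so the sketch works without network access.

```python
def check_bucket_without_creds(bucket_name, head, max_requests=2):
    """Return True if the bucket appears to exist, retrying once on a 503.

    `head` is any callable that takes a URL and returns an HTTP status code.
    Hypothetical sketch -- not S3Scanner's real implementation.
    """
    url = "http://%s.s3.amazonaws.com" % bucket_name
    for _ in range(max_requests):
        status = head(url)
        if status == 503:        # intermittent throttling -- try once more
            continue
        return status != 404     # 404 means the bucket does not exist
    # Every attempt returned 503: give up and report not found
    return False
```

Note that any status other than 404 (200, 403, etc.) counts as "exists", since a 403 still indicates a real, access-restricted bucket.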
- Removed: The `--default-region` argument. The new way of checking if buckets exist doesn't need the bucket's region and neither do any of the other functions. We're region-free now baby
- Added: The `--version` argument. Pretty self-explanatory
- Removed: The `--include-closed` argument. Now that the tool is more self-aware of the permissions on a bucket, it can be hard to determine what makes a bucket "open" or "closed". Disabling for now until I determine a better way to handle it.
- #33 – Public-ness is not accurate – Using boto now fixes this
- #32 – Buckets in bucket-stream form return 'not found' – `s3scanner.py` now parses the bucket name out and ignores the region
- #43 – Cancelling bucket dumping doesn’t work – Implicitly fixed because now the dump process is not in the foreground.
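The #32 fix boils down to normalizing whatever form a bucket arrives in down to a bare name. A minimal sketch of that idea, under assumptions: the `parse_bucket_name` helper and the exact input formats shown are illustrative, not the tool's actual parser.

```python
import re

def parse_bucket_name(line):
    """Extract a bare bucket name, ignoring any region information.

    Hypothetical input formats handled:
      - plain names:            "mybucket"
      - name:region pairs:      "mybucket:us-east-1"
      - bucket-stream domains:  "mybucket.s3-us-west-2.amazonaws.com"
    """
    line = line.strip()
    # Strip a trailing S3 domain, with or without an embedded region
    line = re.sub(r"\.s3[.-][a-z0-9-]*\.?amazonaws\.com$", "", line)
    # Drop a ":region" suffix if present
    return line.split(":")[0]
```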
- The travis-ci config was changed to run 4 total jobs: Python 2.7 and 3.6, each run with and without AWS credentials configured.
- A testing requirement matrix was created in the wiki to allow for easier tracking of test coverage.
- Added tests:
- Removed tests:
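A 2 × 2 matrix like the one above (two Python versions, with and without credentials) could be expressed in `.travis.yml` roughly as follows. This is a sketch, not the project's actual config; the `WITH_AWS_CREDS` variable name is made up.

```yaml
language: python
python:
  - "2.7"
  - "3.6"
env:
  # Each Python version runs once per env entry: 2 x 2 = 4 jobs
  - WITH_AWS_CREDS=true
  - WITH_AWS_CREDS=false
```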
```shell
# Optional: use a virtualenv
virtualenv venv && source ./venv/bin/activate
pip install -r requirements.txt
```
The tool takes in a list of bucket names to check. Found buckets are output to file by name. The tool will also dump the contents of listable buckets locally.