stoQ v3.0.0 released: open-source framework for enterprise-level automated analysis
stoQ is an automation framework that helps simplify the more mundane and repetitive tasks an analyst is required to do. It allows analysts and DevSecOps teams to quickly move between different data sources, databases, decoders/encoders, and numerous other tasks. It was designed to be enterprise-ready and scalable, while remaining lean enough for individual security researchers.
Fundamentally, stoQ is structured to sit in the middle of your analyst workflow. It works best when it can leverage other security tools to ingest objects and a database to store the output. Let’s take a look at how stoQ is structured.
For sources, stoQ can ingest data from individual files, monitor a directory for new files, pull from a database, or consume from an API. This is where stoQ at scale can be extremely powerful: files extracted from threat vectors such as HTTP or e-mail can be sent to stoQ for automatic enrichment and processing. For example, file extraction from Suricata or Bro can be used to feed files into stoQ. Sending all executables, PDFs, or Office documents allows higher-risk file types to be analyzed automatically as they hit the network.
Reader, Decoder, Extractor, and Carver plugins can be used to run the gamut of common activities against ingested files. Use reader plugins to extract text and look for keywords. Use decoder plugins to automatically handle XOR-encoded content or decode Base64 strings. Extractor plugins automate tasks like decompressing archives and deflating streams in PDF documents. Carver plugins extract hidden payloads, such as shellcode or Flash files embedded within Word documents. These enriched objects are then passed back to the stoQ framework for additional processing.
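The decoding step described above can be sketched in plain Python. This is an illustrative example only, not stoQ's decoder plugin API; `xor_decode` and `b64_decode` are hypothetical helper names:

```python
import base64

def xor_decode(data: bytes, key: int) -> bytes:
    """Hypothetical decoder: undo a single-byte XOR obfuscation."""
    return bytes(b ^ key for b in data)

def b64_decode(data: bytes) -> bytes:
    """Decode a Base64-encoded payload into raw bytes."""
    return base64.b64decode(data)

# A payload obfuscated with single-byte XOR (key 0x41), then Base64-encoded
hidden = base64.b64encode(xor_decode(b"MZ\x90\x00", 0x41))

# Reverse both layers to recover the original content
recovered = xor_decode(b64_decode(hidden), 0x41)
assert recovered == b"MZ\x90\x00"
```

In a pipeline like stoQ's, each decoded layer would be handed back to the framework so other plugins can inspect it as well.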
Worker plugins interact with external systems, allowing objects to be processed elsewhere and the results returned to stoQ to be associated with the object. This lets stoQ interact with tools (exiftool, TrID, YARA, etc.) and APIs (FireEye, VirusTotal, ThreatCrowd, etc.) to gather even more data about our objects. Remember, all of this can be automated and quickly scaled to requirements.
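As an illustration of the kind of enrichment a worker performs, here is a minimal, self-contained sketch that computes hashes for a payload the way a hashing worker might. It does not use stoQ's `WorkerPlugin` interface; `hash_worker` is a hypothetical name:

```python
import hashlib
from typing import Dict

def hash_worker(payload: bytes) -> Dict[str, str]:
    """Illustrative worker: enrich a payload with its cryptographic hashes."""
    return {
        "md5": hashlib.md5(payload).hexdigest(),
        "sha1": hashlib.sha1(payload).hexdigest(),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

# The framework would attach this dict to the scanned object's results
results = hash_worker(b"hello world")
```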
Once stoQ has ingested, enriched, and processed an object, the results are sent to a Connector plugin for storage. This can be as simple as a regular text file or a database, or as complex as multiple databases spread across multiple data centres.
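The simplest connector described above, a plain text file, could be sketched like this. It is illustrative only, not stoQ's `Connector` API; `save_results` is a hypothetical name:

```python
import json
import os
import tempfile

def save_results(results: dict, path: str) -> None:
    """Illustrative connector: append one scan result per line as JSON."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(results) + "\n")

path = os.path.join(tempfile.mkdtemp(), "results.jsonl")
save_results({"sha256": "abc123", "plugins_run": ["hash", "yara"]}, path)

# Read the stored results back, one JSON object per line
with open(path, encoding="utf-8") as f:
    stored = [json.loads(line) for line in f]
```

A line-per-result JSON file like this is trivially ingestible by Elasticsearch or Splunk, which is what makes the stored metadata queryable later.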
Leveraging that data with something like Elasticsearch or Splunk can give us a very rich resource of metadata for the objects that have passed through stoQ. This large and detailed dataset can be used to find larger trends and anomalies in your environment. stoQ enables you to craft queries and alerts for all of this metadata.
Changes in stoQ v3.0.0:

- Support for asyncio within the framework and plugins
- Better type checking support
- `self.log` may be leveraged from within all plugin classes
- Add new `Error()` class for standardizing errors from stoQ and plugins
  - `Error()` will track plugin name, error message, and `payload_id` (optional)
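A standardized error record like the one described could look roughly like this. This is a sketch of the idea, not stoQ's actual `Error` class:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Error:
    """Sketch of a standardized error: plugin name, message, optional payload id."""
    plugin_name: str
    error: str
    payload_id: Optional[str] = None

# An error raised by a plugin before any payload is involved
err = Error(plugin_name="yara", error="rule compilation failed")
```

Carrying the plugin name and payload id with every error makes failures attributable when many plugins run against many payloads in one request.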
- Add configuration properties from [Documentation] to each plugin object when loaded
- `PayloadMeta` now has a `should_scan` attribute, allowing payloads to be logged and archived but not scanned by worker plugins
- `Payload` is now updated as results are completed. Results from completed scans will be available to other plugins instantly
- `Request()` class is passed to all dispatcher, worker, and archiver plugins. The `Request` object contains all payloads, request metadata, results, and errors from all other completed plugins. This allows all necessary plugins to have a full understanding of the current state of the complete request
- `WorkerPlugin`s now have a configuration option of `required_workers`. This allows for chained worker dependencies: if `required_workers` is defined, the parent plugin will not be run until all required plugins have completed successfully. The parent plugin may then use results from the other completed plugins
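A minimal sketch of how such chained dependencies can be resolved, with the caveat that this is illustrative scheduling logic and not stoQ's implementation:

```python
from typing import Dict, List, Set

def run_order(required_workers: Dict[str, Set[str]]) -> List[List[str]]:
    """Group workers into rounds; a worker runs once all its requirements finish."""
    completed: Set[str] = set()
    order: List[List[str]] = []
    while len(completed) < len(required_workers):
        # A worker is ready once every worker it requires has completed
        ready = [w for w, deps in required_workers.items()
                 if w not in completed and deps <= completed]
        if not ready:
            raise RuntimeError("circular or unmet worker dependency")
        order.append(sorted(ready))  # workers in the same round could run in parallel
        completed.update(ready)
    return order

# 'yara' requires 'decoder'; 'report' requires 'yara'
deps = {"decoder": set(), "yara": {"decoder"}, "report": {"yara"}}
order = run_order(deps)  # [["decoder"], ["yara"], ["report"]]
```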
- Duplicate extracted payloads are no longer simply skipped; they are appended to the existing payload's `extracted_by` and `extracted_from`
- Additions to `stoq.helpers` to extend options for `Stoq` and plugin configurations
- Parallelization is performed across all of the plugins that can run in a given round,
instead of parallelizing across all of the plugins to perform on a given payload (#147)
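The round-based parallelization can be sketched with `asyncio`. The plugin coroutine here is a hypothetical stand-in, not stoQ's scheduler:

```python
import asyncio
from typing import List

async def run_plugin(name: str, payload: bytes) -> str:
    """Stand-in for a plugin coroutine; real plugins would do I/O or scanning."""
    await asyncio.sleep(0)  # yield control, as real async plugins would
    return f"{name}:{len(payload)}"

async def run_round(plugins: List[str], payload: bytes) -> List[str]:
    # Every plugin eligible in this round runs concurrently against the payload
    return await asyncio.gather(*(run_plugin(p, payload) for p in plugins))

results = asyncio.run(run_round(["hash", "yara", "exif"], b"MZ\x90"))
```

`asyncio.gather` preserves the order of its arguments, so results can still be matched back to the plugin that produced them.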
- `plugin_name` is set to the name of the plugin class in case `Name` is not defined in the plugin's configuration
- `PayloadResults` is now an object of `Payload.results`, rather than an independent object
- Most objects have been removed from `Payload` and are now available in `PayloadResults`
- `PayloadResults.plugins_run` is now a `List[str]`
- `PayloadResults.workers` is now a `Dict[str, Dict]` rather than a `List[Dict[str, Dict]]`
- `PayloadMeta` is now an object of `PayloadResults`
- `PayloadResults.extracted_by` is now a `List[str]`
- `PayloadResults.extracted_from` is now a `List[str]`
- Dispatchers run on each payload every round, instead of once per payload. This allows
the dispatcher to take advantage of the request state model. (#147)
- Worker plugins can specify additional plugins to run on the payload they scan, effectively giving them dispatch capability.
With YARA, for example, this allows us to directly scan with YARA and dispatch
to other plugins by running YARA once. Otherwise, we would run YARA as a dispatcher,
and then immediately run YARA again as a worker plugin. (#147)
- Archivers run at the very end along with connectors and decorators because we no
longer scan a payload to completion at once. (#147)
- The default value for `max_recursion` has increased because the average number of worker rounds taken to complete a scan is expected to increase. (#147)
- The `DeepDispatcher` plugin class has been removed
- `Payload.plugins_run` has been removed in favor of `PayloadResults.plugins_run`
- `Payload.worker_results` has been removed in favor of `PayloadResults.workers`
- `RequestMeta` is no longer passed to plugins, in favor of the `Request` object
- `plugin_opts` has been removed from the plugin `__init__` function. All plugin configuration options are only available via the plugin's configuration
Copyright 2014-2017 PUNCH Cyber Analytics Group