dsiem v0.33.5 released: a security event correlation engine for the ELK stack
Dsiem provides OSSIM-style correlation for normalized logs and events, performs lookups against threat intelligence and vulnerability information sources, and produces risk-adjusted alarms.
- Runs in standalone or clustered mode, with NATS as the messaging bus between frontend and backend nodes. Together with ELK, this makes the entire SIEM platform horizontally scalable.
- OSSIM-style correlation and directive rules, easing the transition from OSSIM.
- Alarm enrichment with data from threat intelligence and vulnerability information sources. Built-in support for Moloch Wise (which itself supports AlienVault OTX and others) and Nessus CSV exports. Support for other sources can easily be implemented as plugins.
- Instrumentation supported through Metricbeat and/or Elastic APM server, so no extra monitoring stack is needed for this purpose.
- Built-in rate limiting and back-pressure control: set the minimum and maximum events per second (EPS) received from Logstash based on your hardware capacity and the delays you can accept in event processing.
- Loosely coupled and designed to be composable with other infrastructure platforms; it doesn’t try to do everything. Loose coupling also means that Dsiem can be used as an OSSIM-style correlation engine with a non-ELK stack if needed.
- Batteries included:
  - A directive conversion tool that reads an OSSIM XML directive file and translates it into Dsiem's JSON-style config.
  - A SIEM plugin creator tool that reads an existing index pattern from Elasticsearch and creates the Logstash configuration needed to clone the relevant fields' content to Dsiem. The tool can also generate the basic directives Dsiem needs to correlate received events and generate alarms.
  - A helper tool that serves Nessus CSV files over the network to Dsiem.
  - A lightweight Angular web UI for basic alarm management (closing, tagging) and easy pivoting to the relevant indices in Kibana, where the actual analysis takes place.
- Obviously a cloud-native, twelve-factor app, and all that jazz.
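To illustrate what the directive conversion tool produces, here is a minimal sketch of a JSON-style directive. The field names and values below are assumptions modeled on OSSIM conventions (a two-stage rule that escalates on repeated occurrences); consult the Dsiem documentation for the authoritative schema.

```json
{
  "directives": [
    {
      "id": 1,
      "name": "Ping Flood from SRC_IP",
      "priority": 3,
      "kingdom": "Reconnaissance & Probing",
      "category": "Misc Activity",
      "rules": [
        {
          "name": "ICMP Ping", "stage": 1, "occurrence": 1,
          "plugin_id": 1001, "plugin_sid": [2100384],
          "from": "ANY", "to": "ANY",
          "reliability": 1, "timeout": 0
        },
        {
          "name": "ICMP Ping", "stage": 2, "occurrence": 300,
          "plugin_id": 1001, "plugin_sid": [2100384],
          "from": ":1", "to": ":1",
          "reliability": 10, "timeout": 600
        }
      ]
    }
  ]
}
```

The `":1"` references in stage 2 follow the OSSIM convention of matching the source/destination captured by rule 1, so the directive only escalates when the same host keeps pinging.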
In the diagram above:
- Log sources send their logs to Syslog/Filebeat, which then forwards them to Logstash with a unique identifying field. Logstash parses the logs using different filters depending on the log source type, and sends the results to Elasticsearch, typically creating a single index pattern for each log type (e.g. `suricata-*` for logs received from Suricata IDS, `ssh-*` for SSH logs, etc.).
- Dsiem uses a special-purpose Logstash config file to clone incoming events from log sources right after Logstash has finished parsing them. Through the same config file, the cloned event is used (independently of the original event) to collect the fields Dsiem requires, such as Title, Source IP, Destination IP, and so on.
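The cloning step above can be sketched with Logstash's standard `clone` and `mutate` filters. The field names used here (`[application]`, `[alert][signature]`, `title`, `src_ip`, and so on) are illustrative assumptions, not Dsiem's actual shipped configuration:

```conf
filter {
  # Only clone events coming from a particular log source, e.g. Suricata
  if [application] == "suricata" {
    # clone emits a copy of the event with its type set to "siem_events"
    clone { clones => [ "siem_events" ] }
  }
  if [type] == "siem_events" {
    # Build the normalized fields on the clone; the original event is untouched
    mutate {
      add_field => {
        "title"  => "%{[alert][signature]}"
        "src_ip" => "%{[src_ip]}"
        "dst_ip" => "%{[dest_ip]}"
      }
    }
  }
}
```

Because the clone carries its own `type`, the original event continues through the normal pipeline to its `suricata-*` index unchanged.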
- The output of the above step is called a Normalized Event, because it represents logs from multiple different sources in a single format with a set of common fields. Those events are then sent to Dsiem through the Logstash HTTP output plugin, and to Elasticsearch under a dedicated index name pattern.
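A normalized event might look like the following. The field set is an assumption based on the OSSIM-style fields mentioned above (title, source/destination IP, plugin identifiers), not an exact Dsiem schema:

```json
{
  "timestamp": "2018-10-08T01:02:03.000Z",
  "event_id": "0001",
  "title": "ET SCAN Suspicious inbound to mySQL port 3306",
  "sensor": "sensor-01",
  "src_ip": "10.0.0.5",
  "dst_ip": "10.0.0.10",
  "src_port": 55123,
  "dst_port": 3306,
  "protocol": "TCP",
  "plugin_id": 1001,
  "plugin_sid": 2010937
}
```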
- Dsiem correlates incoming normalized events based on the configured directive rules, performs threat intel and vulnerability lookups, and generates an alarm if the rule's conditions are met. The alarm is then written to a local log file, which is harvested by a local Filebeat configured to send its content to Logstash.
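The "risk-adjusted" part of an alarm combines the matched rule's reliability, the directive's priority, and the targeted asset's value. Assuming Dsiem follows the standard OSSIM risk formula (a sketch of the calculation, not a quote of Dsiem's source):

```python
def calc_risk(reliability: int, priority: int, asset_value: int) -> float:
    """OSSIM-style risk score.

    reliability: 0-10, confidence of the matched correlation rule
    priority:    1-5,  severity assigned to the directive
    asset_value: 1-5,  importance of the targeted asset
    The result therefore ranges from 0 to 10.
    """
    return (reliability * priority * asset_value) / 25


# A fully reliable rule hitting the most valuable asset scores the maximum
print(calc_risk(10, 5, 5))  # -> 10.0
```

Because reliability rises with each correlation stage that fires, the same directive yields a low-risk alarm on first match and a higher-risk one as the attack progresses.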
- At the Logstash end, another Dsiem-specific config file reads those submitted alarms and pushes them to the final SIEM alarm index in Elasticsearch.
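That ingestion step could look roughly like the sketch below. The `siem_alarms` index name, the `alarm_id` field, and JSON-per-line transport over Filebeat are assumptions for illustration:

```conf
input {
  beats { port => 5044 }
}
filter {
  # Alarms arrive from Filebeat as one JSON document per log line
  json { source => "message" }
}
output {
  if [type] == "siem_alarms" {
    elasticsearch {
      hosts       => ["elasticsearch:9200"]
      index       => "siem_alarms"
      # Keying on the alarm's own ID makes later writes update
      # the same document instead of creating duplicates
      document_id => "%{[alarm_id]}"
      action      => "update"
      doc_as_upsert => true
    }
  }
}
```

Upserting by alarm ID is what lets a single index reflect an alarm's current state (e.g. risk or status changes) rather than a stream of duplicates.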
The net result of the above process is that new alarms, and updates to existing ones, can be watched simply by monitoring a single Elasticsearch index.
Copyright (C) 2018