What
On December 29, 2016, the Department of Homeland Security and the Federal Bureau of Investigation jointly released a report regarding alleged "Russian interference" in the 2016 Presidential Election. The report mostly detailed malicious-actor behaviors already familiar to most cybersecurity professionals. The release included recommendations for blunting intrusion attempts by state actors, along with the indicators of compromise related to the "GRIZZLY STEPPE" campaign carried out by Russian military and civilian intelligence services. A separate intelligence community assessment was also released. The report has circulated widely since the election of a new President, with various experts weighing in on its validity. While this author finds the report dubious, the intent here is to let readers make their own judgment through the exercise.
Request for Assistance
DHS and the FBI jointly requested assistance from the public in contributing any additional information related to the threat. Using the indicators provided in the report, this guide shows how to detect this activity in Splunk and assess its impact.
Indicators of Compromise
The first step is to download the CSV hosted by US-CERT to your own Splunk instance. There are two methods: Splunk Web or the command line using wget. Depending on your environment, one may be easier than the other.
Select an app in which to save the lookup file, either the default search app or a custom application. In this example, a custom app called security_viz is used:
$ cd /opt/splunk/etc/apps/security_viz/lookups
From the lookups directory of your app, download the lookup from US-CERT:
$ wget https://www.us-cert.gov/sites/default/files/publications/JAR-16-20296A.csv
Check the permissions of your knowledge objects by reviewing the configuration of your local.meta or default.meta in the app you selected:
$ cat /opt/splunk/etc/apps/security_viz/metadata/default.meta
# Application-level permissions
[]
access = read : [ * ], write : [ admin, power ]
### EVENT TYPES
[eventtypes]
export = system
### PROPS
[props]
export = system
### TRANSFORMS
[transforms]
export = system
### LOOKUPS
[lookups]
export = system
### VIEWSTATES: even normal users should be able to create shared viewstates
[viewstates]
access = read : [ * ], write : [ * ]
export = system
For the GUI method, download the CSV to your local desktop and follow the Splunk Enterprise documentation on configuring lookups.
Searching in Splunk
Filter the threat intel to entries where only IPv4 addresses are provided, then strip the defanging brackets around the octets from the value of the INDICATOR_VALUE field:
| inputlookup JAR-16-20296A.csv where TYPE=IPV4ADDR
| rex field=INDICATOR_VALUE mode=sed "s/\[|\]//g"
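The same sed expression can be sanity-checked outside Splunk on a sample defanged value (the IP below is illustrative, not taken from the report):

```shell
# Strip the defanging brackets with the same pattern used in the rex command.
echo '198.51.100[.]7' | sed -E 's/\[|\]//g'
# prints 198.51.100.7
```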
Use the crafted lookup as a subsearch to correlate traffic from any data source:
index=suricata [ | inputlookup JAR-16-20296A.csv where TYPE=IPV4ADDR | rex field=INDICATOR_VALUE mode=sed "s/\[|\]//g" | rename INDICATOR_VALUE as src_ip | fields src_ip ]
Note the trailing fields command: without it, the subsearch would return every column of the lookup as additional search terms, and the correlation would never match.
Known Tor Addresses?
One claim made by many journalists and security experts is that the list of indicators of compromise includes known Tor addresses, which makes the report dubious in its attempt to attribute the activity solely to Russia (reference). This claim should be taken seriously: various exit nodes in the Tor network were previously used by Ghost Net (a China-based APT) to exfiltrate sensitive governmental information, which was intercepted by WikiLeaks (reference).
To test this claim, the IOCs from the report can be compared against known Tor addresses in Splunk.
First, download the list of known Tor addresses from the time period specified in the report and merge them into a CSV:
$ wget https://collector.torproject.org/archive/exit-lists/exit-list-2016-05.tar.xz
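The archive can then be flattened into a single-column CSV suitable for a Splunk lookup. This is a sketch: the extracted directory name and the tor_exit_ips.csv / tor_ip names are assumptions, and the awk pattern relies on the exit-list snapshot format, which records each relay's exit IP on "ExitAddress" lines:

```shell
# Unpack the CollecTor archive (assumed to extract to exit-list-2016-05/).
tar -xJf exit-list-2016-05.tar.xz
{
  echo "tor_ip"                                   # header row for the Splunk lookup
  awk '/^ExitAddress/ {print $2}' exit-list-2016-05/* | sort -u
} > tor_exit_ips.csv
```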
Second, configure your lookup in Splunk.
Third, write a search which compares the two lookup files against one another.
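As a cross-check outside Splunk, the two lists can also be compared with standard command-line tools. This sketch assumes INDICATOR_VALUE is the first column of the JAR CSV (adjust the cut field if not) and a hypothetical tor_exit_ips.csv containing one IP per line under a one-line header:

```shell
# Normalize both lists to sorted, de-duplicated plain IPs, then intersect.
cut -d, -f1 JAR-16-20296A.csv | sed -E 's/\[|\]//g' | sort -u > ioc_ips.txt
tail -n +2 tor_exit_ips.csv | sort -u > tor_ips.txt
comm -12 ioc_ips.txt tor_ips.txt    # lines common to both files
```

comm requires sorted input, which the sort -u steps guarantee; any output here is an IOC that was also a Tor exit address during the period.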
Conclusion
This exercise demonstrates how Splunk can be used to evaluate government-released threat intelligence, cross-reference indicators against known infrastructure like Tor exit nodes, and arrive at your own evidence-based assessment of the attribution claims. The tooling is straightforward -- the harder question is what the data actually tells you.