Alerting Framework

All the data in Hyperion has no value if it isn't leveraged in a meaningful way. Consequently, Hyperion doubles as a detection tool.

How does Hyperion compare to Splunk?

The comparison between Splunk and Hyperion is an obvious one, so here are a few key differences between the two:

Real-time vs. Historical

Splunk is best suited for alerting on recently ingested events, especially when the event itself contains all the information needed to trigger an alert (such as suspicious command-line executions). Hyperion, on the other hand, is better suited for alerting on events that need to be correlated with data that can be a day, a week, a month, or a year old.

Example

An IP address involved in an alert was used in an Okta login.

// Look for any "chains" of nodes where a LogonAttempt is related to an IPv4
// which itself is related to a HiveAlert
MATCH (logon:LogonAttempt)-[:LOGON_MADE_FROM_IP]->(ip:IPv4)-->(alert:HiveAlert)
RETURN logon.time, ip.value, alert.name ORDER BY logon.time DESC
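
Because the graph keeps history, you can pin the recent side of the chain to a window while the old side stays open-ended. The sketch below narrows the same query to logons from the last 24 hours, while the HiveAlert can be arbitrarily old; it assumes logon.time is stored as a Cypher datetime.

// Same chain as above, but only keep logons from the last 24 hours.
// The HiveAlert the IP is tied to can be arbitrarily old.
MATCH (logon:LogonAttempt)-[:LOGON_MADE_FROM_IP]->(ip:IPv4)-->(alert:HiveAlert)
WHERE logon.time >= datetime() - duration({hours: 24})
RETURN logon.time, ip.value, alert.name ORDER BY logon.time DESC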

Single data source vs. Correlated

While you can enrich events in Splunk, doing so is not trivial, since it requires maintaining lookup tables/KV stores. Hyperion is built with enrichment as a key piece of functionality, so writing alerts that leverage multiple sources of enrichment to detect badness is trivial, and you can easily combine multiple data sources without writing insanely complex JOINs.

So, as a rule of thumb: if your alert looks at a single event/data source, like CarbonBlack telemetry, use Splunk. If you need to enrich, or combine multiple data sources, use Hyperion.

Example

An IP address involved in an Okta login had port 8443 exposed within the last 7 days AND is an ExpressVPN node located in Russia.

// First we look for IPs related to LogonAttempts made within the last hour
MATCH (logon:LogonAttempt)-[:LOGON_MADE_FROM_IP]->(ip:IPv4),
// We then specify that our IP needs to also be connected to a VPNProvider of a certain name
      (ip)-[:IP_PART_OF_VPN]->(vpn:VPNProvider {name: "ExpressVPN"}),
// And to a service on port 8443
      (ip)-[:IP_HAS_SERVICE]->(serv:Service {port: 8443})
// Keep only logons from the last hour and services seen within the last 7 days
WHERE logon.time >= datetime() - duration({hours: 1})
  AND serv.l_seen >= datetime() - duration({days: 7})
// And where the IP's loc (human-readable geolocation string) starts with "ru."
// The syntax for "loc" is country-code.region.city
  AND ip.loc STARTS WITH "ru."
RETURN logon.time, ip.value ORDER BY logon.time DESC

Behind the scenes, this query uses three enrichment sources: Spur.us for VPN detection, Maxmind for GeoIP, and Censys for service discovery. But you don't need to know this, because in most cases the data is already there, ready for you to use: Hyperion enriches entities at time of ingest.
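
If you want to see what enrichment is already attached to a given IP before writing an alert, a quick inspection query helps. This is just a sketch reusing the labels and relationships from the example above; the IP value is a placeholder.

// Inspect the enrichment already attached to one IP (placeholder value).
// OPTIONAL MATCH keeps the row even if there is no VPN or service data.
MATCH (ip:IPv4 {value: "203.0.113.7"})
OPTIONAL MATCH (ip)-[:IP_PART_OF_VPN]->(vpn:VPNProvider)
OPTIONAL MATCH (ip)-[:IP_HAS_SERVICE]->(serv:Service)
RETURN ip.value, ip.loc, vpn.name, collect(serv.port) AS open_ports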

Atomic vs. Parametrised

Another benefit of Hyperion is that it can look for patterns that are not known ahead of time. For example, Hyperion can take filenames and command lines mentioned in Threat Reports and perform substring searches against the CarbonBlack dataset to see if they have been seen internally. The same can technically be achieved in Splunk but, again, you'd need to maintain a lookup table for every search like this, whereas Hyperion already contains the needed data.

Example

A command line mentioned in a threat report has been seen in a process execution internally.

// Look for all CommandLines seen in NewsArticles
MATCH (report:NewsArticle)-[:NEWS_ARTICLE_REFERENCES]->(cmdline:CommandLine)
// Carry the matched command lines (and their reports) into the next MATCH
WITH cmdline, report
// Now look for all ProcExec's with a CommandLine
MATCH (proc:ProcExec)-[:PROCESS_EXECUTED_WITH_CMDLINE]->(cmd:CommandLine)
// And filter for just those where the CommandLine used in the process
// contains one mentioned in the threat report
WHERE cmd.value CONTAINS cmdline.value
RETURN proc, cmdline, cmd, report
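
One practical caveat with substring matching: a very short command line pulled from a report will match almost every process execution. A hedged tweak is to skip short indicator strings before comparing; the 20-character cutoff below is an arbitrary example, not a Hyperion default.

// Same query, but ignore indicator command lines shorter than 20 characters
// so trivial substrings don't flood the results.
MATCH (report:NewsArticle)-[:NEWS_ARTICLE_REFERENCES]->(cmdline:CommandLine)
WHERE size(cmdline.value) >= 20
WITH cmdline, report
MATCH (proc:ProcExec)-[:PROCESS_EXECUTED_WITH_CMDLINE]->(cmd:CommandLine)
WHERE cmd.value CONTAINS cmdline.value
RETURN proc, cmdline, cmd, report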