The hardest security metric

Omer Singer
3 min read · Jan 16, 2019


For a data-driven security team, there’s one number that can never be known. This number is not just unknowable, it’s also critically important. If only one metric mattered, it might be this one. This elusive ratio is your team’s False Negative (FN) rate: the share of real threats you did not catch.

Never knowing how many things we missed is a fact of life for security teams. The problem is that we need at least a sense of what we’re missing if we want to measure our effectiveness, strike a balance between alerting that is too noisy and alerting that is too quiet, and decide where to build up our capabilities next.

There are several ways to approach this problem. If you were to check the dashboards of most security teams and vendors, you would see that the prevalent approach is to ignore this metric. This helps to explain why much of the cybersecurity conversation is around flashy and exotic attacks instead of the basics where most of the trouble actually takes place. Data-driven security teams should make an explicit decision to measure their false negative rate.

Since we can never know everything that we’ve missed, we must accept that our FN metric will be fuzzy and inaccurate. That’s okay. Accepting this principle opens up measurement strategies that are infinitely better than nothing.

For example, run a tabletop exercise with your team where you consider recent cybersecurity news articles as if they’ve affected your environment. A recent post described an Elasticsearch database that was accidentally exposed to the internet and leaked sensitive data. Simulate that this happened at your organization. Was the publicly exposed port detected, or was that vulnerability detection a false negative? Did the remote queries (in theory) trigger alerts? What about the data exfiltration? Your simulation might identify ten relevant alarms, of which only two would have been triggered within your current setup, resulting in a false negative rate of 80%. That number is worth tracking and discussing in your next planning session!
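To make the arithmetic concrete, here is a minimal Python sketch of scoring a tabletop exercise this way. The detection names are made up for illustration, not pulled from any real product:

```python
# Minimal sketch: score a tabletop exercise as a false negative rate.
# The detection names are hypothetical examples, not a real product's alerts.
expected_detections = {
    "public-port-exposed", "anonymous-query", "bulk-read",
    "new-client-ip", "data-exfil-volume", "off-hours-access",
    "unauthenticated-api", "schema-dump", "large-egress", "geo-anomaly",
}
would_have_fired = {"public-port-exposed", "large-egress"}

missed = expected_detections - would_have_fired
fn_rate = len(missed) / len(expected_detections)
print(f"Tabletop FN rate: {fn_rate:.0%}")  # 8 of 10 missed -> 80%
```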

A stronger but more involved approach is to bring in a red team to carry out actual (authorized) threat actor activity in your environment. By closely comparing the known red team activity to the detections that lit up your alert feed, you can count how many attacks out of the total did not result in alerts. That’s your false negative rate, at least within the tested area.
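As a sketch of how that comparison might be automated, assuming the red team report and the alert feed can both be keyed on an ATT&CK technique ID and host (the field names and sample data are assumptions, not a prescribed schema):

```python
# Minimal sketch: compare a red team activity log against the alert feed.
# Keying on (ATT&CK technique, host) is an assumption; use whatever join
# key your red team report and alerting pipeline share.
red_team_actions = [
    {"technique": "T1110", "host": "web-01"},  # brute force
    {"technique": "T1059", "host": "web-01"},  # command execution
    {"technique": "T1041", "host": "db-02"},   # exfiltration
]
alerts = [{"technique": "T1110", "host": "web-01"}]

detected = {(a["technique"], a["host"]) for a in alerts}
missed = [act for act in red_team_actions
          if (act["technique"], act["host"]) not in detected]
fn_rate = len(missed) / len(red_team_actions)
print(f"Red team FN rate: {fn_rate:.0%}")  # 2 of 3 missed -> 67%
```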

The future may involve continuous measurement of false negatives based on automated attack simulation tools. While these solutions haven’t yet entered the mainstream, attack simulation software that is regularly updated with fresh attack techniques could be integrated with the security analytics solution, comparing attempts to detections to keep an up-to-date FN metric.
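A minimal sketch of what that loop could look like, with placeholder functions standing in for the simulation tool and the analytics query (neither reflects a real product’s API):

```python
# Minimal sketch: track the FN metric continuously from simulation runs.
# Both functions are placeholders, not a real simulation or SIEM API.
import datetime

def run_simulation_round():
    """Placeholder: technique IDs the attack simulation tool executed."""
    return {"T1110", "T1059", "T1041", "T1567"}

def query_detections(attempted):
    """Placeholder: the subset of attempts that raised an alert."""
    return {t for t in attempted if t in {"T1110", "T1567"}}

history = []  # time series of (date, FN rate) for trend reporting
attempted = run_simulation_round()
detected = query_detections(attempted)
history.append((datetime.date.today(), len(attempted - detected) / len(attempted)))
print(history)  # e.g. [(datetime.date(2019, 1, 16), 0.5)]
```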

While it may seem obvious that you don’t know what you don’t know, security teams determined to be data-driven can get a quantifiable sense of what they’re missing. Tracking this number over time will help to guide planning decisions and balance out efforts to reduce alert noise. The hardest metric might also be the most valuable.

Written by Omer Singer

I believe that better data is the key to better security. These are personal posts that don’t represent Snowflake.
