Real-life threat hunting stories
The world of threat detection and threat hunting is filled with long days of nothingness punctuated by the rare moments of OMG EVERYTHING IS ON FIRE. That excitement often has a happy ending when it turns out the alarm was just a drill and the threats were hired to test the team’s readiness.
Organizing a well-thought-out attack drill, also known as a red team exercise, is a good way to measure the effectiveness of your threat detection or threat hunting program. Drills also make for good stories to share because all the facts are available, which isn’t always the case in a real data breach.
Threat Detection vs Threat Hunting
Before we jump into some exciting stories, let’s take a brief detour into cybersecurity terminology. What is the difference between threat detection and threat hunting?
Threat detection is when your system tells you that there appears to be a bad guy getting into places where he shouldn’t. This is an area where security teams have traditionally invested much of their energy. A computer worm copying bad files using a Windows server exploit is doing something evil which can be seen for what it is if you know what you’re looking for.
With the transition to the cloud, however, vulnerabilities shifted from insecure code to insecure configurations. Attacks in cloud-centric infrastructure will usually take advantage of legitimate functionality in a way that’s unintended by the victim and profitable for the attacker.
As a result, threat detection in the cloud remains a huge challenge. Most solutions in this space use anomaly detection (“that’s unusual so might be bad”) or threat intelligence (“that action is coming from a known bad IP address”). The limitations of threat detection in the cloud are elevating the importance of threat hunting.
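These two detection approaches can be sketched in miniature. The log fields, threshold, and IP list below are illustrative assumptions, not taken from any real product:

```python
def detect(events, known_bad_ips, usual_rate):
    """Two classic cloud threat-detection approaches in miniature:
    threat intelligence (known-bad IP match) and anomaly detection
    (a call rate far above the identity's usual baseline)."""
    alerts = []
    for e in events:  # assumed shape: {"user": str, "source_ip": str, "calls_per_min": float}
        if e["source_ip"] in known_bad_ips:
            alerts.append((e["user"], "threat-intel: known bad IP"))
        elif e["calls_per_min"] > 3 * usual_rate.get(e["user"], float("inf")):
            alerts.append((e["user"], "anomaly: unusual call rate"))
    return alerts

events = [{"user": "bob", "source_ip": "198.51.100.9", "calls_per_min": 2},
          {"user": "eve", "source_ip": "10.0.0.3", "calls_per_min": 400}]
alerts = detect(events, {"198.51.100.9"}, {"eve": 20})
print(alerts)  # → [('bob', 'threat-intel: known bad IP'), ('eve', 'anomaly: unusual call rate')]
```

Both checks share the same weakness in the cloud: an attacker using legitimate APIs from a clean IP at a normal rate triggers neither.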
Threat hunting is like doing a research project on your security logs. Unlike threat detection, threat hunting is a manual investigation effort that starts with expert defenders coming up with a hypothesis like “if there was a threat actor in this system, she’d be doing X with side effect Y.” Not having to describe anything more than a theory means that threat hunting can be effective at catching the “ghosts in the cloud” that are top of mind for enterprise security teams.
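A hunt hypothesis usually reduces to a query over the logs. As a minimal sketch (the log schema, baseline numbers, and threshold factor are assumptions for illustration), here is how a hypothesis like “if someone were exfiltrating data, a single identity’s download volume would dwarf its historical baseline” might be checked:

```python
def hunt_exfiltration(events, baseline_mb, factor=10):
    """Hypothesis: an identity exfiltrating data downloads far more
    than its historical daily baseline. Returns matching identities."""
    totals = {}
    for e in events:  # assumed shape: {"user": str, "bytes": int}
        totals[e["user"]] = totals.get(e["user"], 0) + e["bytes"]
    # flag anyone exceeding `factor` times their usual daily megabytes
    return [u for u, b in totals.items()
            if b > factor * baseline_mb.get(u, 0) * 1024 * 1024]

baseline = {"alice": 5, "svc-backup": 500}  # typical daily MB per identity
events = [{"user": "alice", "bytes": 200 * 1024 * 1024},
          {"user": "svc-backup", "bytes": 400 * 1024 * 1024}]
print(hunt_exfiltration(events, baseline))  # → ['alice']
```

The point is the shape of the work: the defender states what a compromise would look like, then interrogates the data for that shape, rather than waiting for a predefined alert to fire.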
Interestingly, security data lakes with their unlimited storage capacities are blurring the line between threat detection and threat hunting. The following real-life stories of threat hunting in cloud-centric environments reveal an emerging trend where threat detection is taking on properties of threat hunting while threat hunting becomes more reliant on tip-offs from data-driven threat detection.
First Story: Familiar Denials
This story is from Snowflake’s own security team, the cybersecurity equivalent of the Harlem Globetrotters. The team had been spending time studying hacker toolkits, especially those that were breaking new ground in attacking infrastructure hosted in AWS. One of these toolkits was the Pacu offensive framework released by Rhino Labs together with a series of excellent blog posts.
Analysis of these attack tools guided the threat hunting tactics that would soon pay off. Historically, hacker groups at all levels tend to read the same literature and even reuse the same toolkits. The Iranian APT group known as CopyKittens, for example, successfully compromised targets using the open source Metasploit toolkit and a trial version of its commercial cousin Cobalt Strike. By studying Pacu’s payloads, the Snowflake security team knew what kind of “access denied” errors would be triggered by the use of the hacking tool in its environment.
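The team’s approach can be approximated in code. The field names and threshold below are illustrative assumptions rather than Snowflake’s actual logic, but the idea is the same: flag an identity whose “access denied” errors span many distinct API calls, the signature of an automated permission-probing tool rather than a human mistyping one command:

```python
from collections import defaultdict

def flag_denial_bursts(cloudtrail_events, min_distinct_denied=4):
    """Flag identities whose AccessDenied errors span many distinct
    APIs -- the pattern produced by permission-enumeration tooling."""
    denied = defaultdict(set)
    for e in cloudtrail_events:
        # illustrative CloudTrail-like fields: errorCode, eventName, userIdentity
        if e.get("errorCode") == "AccessDenied":
            denied[e["userIdentity"]].add(e["eventName"])
    return {user: sorted(apis) for user, apis in denied.items()
            if len(apis) >= min_distinct_denied}

events = [{"userIdentity": "pentester", "eventName": n, "errorCode": "AccessDenied"}
          for n in ["GetAccountSettings", "ListAttachedUserPolicies",
                    "GetConsoleScreenshot", "DescribeSnapshots"]]
events.append({"userIdentity": "dev", "eventName": "PutObject",
               "errorCode": "AccessDenied"})
flagged = flag_denial_bursts(events)
print(flagged)  # 'pentester' is flagged; the lone 'dev' denial is not
```

A developer who fat-fingers one permission generates one kind of denial; a toolkit enumerating what its stolen credentials can do generates dozens of different ones in quick succession.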
A red team exercise was initiated a few months later without the knowledge of most members of the security team. The hired penetration testers, using techniques that had proven successful at previous engagements elsewhere, began feeling around the environment. Their reconnaissance techniques, however, had been influenced by the same payloads studied by the team and tripped alarms that pointed Snowflake’s threat hunters to the vicinity of the red team. Having log data from both the servers and the cloud environment in one Snowflake database enabled the hunt team to investigate related activity and quickly uncover the pentesters.
In a debrief session after the exercise was completed, a member of the red team wrote:
This exercise demonstrated the importance of research and preparation for effective threat hunting.
Second Story: Whose Laptop Is This Anyway?
This story was shared with permission by Hunters.ai, whose solution hunts for threats across terabytes of customer log data. In this case, their customer had hired a contractor to help provision a new service in AWS. Things started heating up when the Hunters “autonomous hunting engine” uncovered a disturbing pattern.
As the contractor’s AWS user was making changes to the cloud environment, the contractor’s laptop was suspiciously idle. The logs from the laptop did not show the kind of browser or script process activity that would be expected during cloud administration. While the public source IP of the cloud administration was not unusual, and the AWS commands were not obviously malicious, something funky was going on.
Tipped off by Hunters.ai to this strange discrepancy, the customer reached out to the contractor and asked for clarification. It turned out that the contractor had gone against the customer’s instructions and used a different laptop than the one provided to them. This is probably a common shortcut of convenience for contractors, but they’re certainly not used to getting called out on it.
The concept of correlating between cloud and endpoint logs is exciting, and usually would be done manually by a threat hunting team. An automated solution that can connect these dots combines the accuracy of threat hunting with the timeliness and scalability of threat detection.
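A simplified version of that correlation can be sketched as follows. The event schemas, the user-to-laptop mapping, and the five-minute window are assumptions for illustration, not details of the Hunters.ai engine:

```python
from datetime import datetime, timedelta

def find_ghost_admin_sessions(cloud_events, endpoint_events, user_to_host,
                              slack=timedelta(minutes=5)):
    """For each cloud administration event, check whether the user's
    assigned laptop showed *any* endpoint activity around the same time.
    A cloud session with a silent endpoint is worth a phone call."""
    ghosts = []
    for ce in cloud_events:  # assumed shape: {"time": datetime, "user": str, "api": str}
        host = user_to_host.get(ce["user"])
        active = any(ee["host"] == host and
                     abs(ee["time"] - ce["time"]) <= slack
                     for ee in endpoint_events)  # {"time": datetime, "host": str}
        if host and not active:
            ghosts.append(ce)
    return ghosts

t = datetime(2019, 10, 1, 9, 0)
cloud = [{"time": t, "user": "contractor", "api": "CreateRole"}]
endpoint = [{"time": t - timedelta(hours=3), "host": "contractor-laptop"}]
mapping = {"contractor": "contractor-laptop"}
ghosts = find_ghost_admin_sessions(cloud, endpoint, mapping)
print(len(ghosts))  # → 1: cloud activity with no matching laptop activity
```

Neither data source alone raises an alarm here; it’s the join between them that exposes the discrepancy.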
Third Story: Multiple Dimensions of Evil
More than six months before the Capital One breach put this hacking technique in the spotlight, the team at Hunters.ai was analyzing customer usage of instance profile credentials. This is a legitimate AWS feature so a threat detection rule would be too noisy to cover this activity.
Instead of a traditional alert rule, the threat hunting team at Hunters.ai used instance credential access as a starting point for analyzing multiple dimensions of related activity. These included actions typically performed by the user, what team the user belongs to according to HR records, where the user usually connects from, what role the EC2 server usually has, how is the server tagged in asset inventory, and so forth.
Taken together, these dimensions paint a much more complete picture than a simple rule that triggers on any use of instance credentials. In the vast majority of cases, such activity is expected and authorized. In this case, however, the additional dimensions pointed to “evil” and the Hunters.ai solution flagged the compromised server which enabled their customer to quickly respond to the situation.
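One way to combine such dimensions, shown here as a hypothetical sketch rather than Hunters.ai’s actual scoring logic, is to treat each as a boolean check against a baseline and escalate only when several deviate at once:

```python
def score_instance_credential_use(event, baseline):
    """Each dimension contributes a point when the observed use of
    instance credentials deviates from the baseline for this user and
    server. A lone anomaly is probably noise; several together point
    at 'evil'. Returns (score, list of deviating dimensions)."""
    checks = {
        "unusual_action": event["action"] not in baseline["typical_actions"],
        "wrong_team":     event["team"] != baseline["owning_team"],
        "new_source":     event["source_ip"] not in baseline["known_ips"],
        "role_mismatch":  event["role"] != baseline["server_role"],
    }
    score = sum(checks.values())
    return score, [name for name, hit in checks.items() if hit]

baseline = {"typical_actions": {"s3:GetObject"}, "owning_team": "data-eng",
            "known_ips": {"10.0.0.7"}, "server_role": "etl-worker"}
event = {"action": "iam:CreateUser", "team": "data-eng",
         "source_ip": "203.0.113.50", "role": "etl-worker"}
score, reasons = score_instance_credential_use(event, baseline)
print(score, reasons)  # → 2 ['unusual_action', 'new_source']
```

A rule that fires on any single dimension would drown analysts in benign alerts; requiring agreement across dimensions is what keeps the legitimate use of instance credentials quiet while the Capital One-style abuse stands out.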
It’s a capability that would have served Capital One well when they were hit by a similar attack over the summer.
Tying these stories together
The common thread in successful threat hunting stories is effective preparation. Threat hunting teams need to study the techniques used by threat actors, penetration testers, and security researchers. This requires significant investment in time and resources, especially in emerging areas such as multi-cloud infrastructure and Kubernetes.
Another form of preparation involves the data serving the threat hunters. Your adversaries will move from endpoints to corporate directories to cloud APIs and back again. This means hunting them down requires unifying and normalizing data from all systems that may be involved. Achieving and maintaining comprehensive visibility is an important challenge to tackle.
As shown in these stories, a rich set of security data that’s analyzed with attacker techniques in mind enables hunting down threats in a broad range of scenarios. Instead of defenders needing to succeed all the time and attackers needing to succeed just once, you can flip the script and apply pressure on the bad guys. If they slip up just once, you’ll have them.