Interrogating Bias in Incident Response

"Crime is common. Logic is rare. Therefore it is upon the logic rather than upon the crime that you should dwell." - Sir Arthur Conan Doyle 6 minute read Published: 2022-11-03

It's Friday afternoon. It always happens on Friday afternoon. You're ready to be done for the week, having closed out a pesky ticket that took far too long. Just as you're about to lock the screen and punch out for the day, you watch the email arrive—almost in slow motion—with that dreadful tagline:

URGENT: Account Compromised

Goodbye to your Friday evening. You don't get to sit down and watch the game. You don't get to enjoy a nice dinner with the fam. Because you, through a series of questionable life choices, have made your way to the role of Lead Incident Responder. The clock is ticking, and all eyes are on you.

And you know you have at least two adversaries: the criminal trying to cause your organization harm, and your own flawed, bias-prone brain.

We can't control the attacker's actions and intentions, but we can master our own mind. When investigating an incident, I find it all too easy to jump to conclusions based on prior assumptions or inductive reasoning. I am fortunate to work with a team that asks a lot of questions of each other during incidents, steel sharpening steel, to ensure nothing gets missed and that we don't misinterpret the data.

Because I know you're busy, here are the questions up front, but do read on for the details.

  1. Do I know the start and finish of the event?
  2. Am I seeing everything I need to see?
  3. Is this normal for my environment?
  4. Is this consistent with expected behavior?

Traveling Through Time and Space

Do I know the start and finish of the event?

Your first order of business during an incident is to establish scope. The event that triggered an alert may not be the start of the activity—in fact, for all but the most basic of attacks, it certainly wasn't. That means you're entering the story in medias res, but you need to find the beginning.

So time bias is the first mental hurdle to overcome. Ask yourself: "Am I seeing the entire story, start to finish?" Models like MITRE ATT&CK and the Lockheed Martin Cyber Kill Chain can assist with this. A complete time window spans initial compromise through detection, eradication, and recovery.
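To make that concrete, here's a minimal sketch (using pandas, with a hypothetical event schema and file name) of widening the investigation window around the triggering alert instead of anchoring on it:

```python
import pandas as pd

# Hypothetical schema: one row per endpoint event, with a 'timestamp' column.
events = pd.read_csv("endpoint_events.csv", parse_dates=["timestamp"])

alert_time = pd.Timestamp("2022-11-03 16:45:00")

# Don't anchor on the alert: look well before it for initial compromise,
# and all the way through the present for persistence and recovery actions.
lookback = pd.Timedelta(days=30)
window = events[
    (events["timestamp"] >= alert_time - lookback)
    & (events["timestamp"] <= pd.Timestamp.now())
]
```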

The Great Unknown

Am I seeing everything I need to see?

[Image: Baldur's Gate. "I have no idea what's about to happen. Just the way I like it."]

I played a lot of Baldur's Gate and its sequels as a kid. Like, a lot. Like, I faked being sick for a week to start and finish Neverwinter Nights. One thing those games taught me was to explore the whole area. When you enter a new space, you can't see everything. Some folks refer to this as the "fog of war," but I reject militarizing every damn thing in cybersecurity, so I prefer to think of it as "the great unknown." It's important to recognize the limits of your knowledge. If you enter a dark room, you know neither how large the room is, nor what it contains.

In incident response, hopefully you're getting some information about the event—that's where the alert came from, right? Even user-reported activity is a form of data.

But are you getting all the data you need? Reporting bias, or a form of it, is our next opponent. What you think is happening on an endpoint is informed by your telemetry. But what if there are pieces missing, either by configuration or malicious activity? Is there no evidence of lateral movement because there is in fact no lateral movement, or because your endpoints have no means of monitoring all network connections?

Do you only have endpoint data, or do you have network telemetry like netflow or packet captures that you can time-correlate to events on the endpoint?

The answers to these questions don't all have to be "yes" to succeed (although the more, the better). But knowing where your blind spots are and how to account for them will prevent incorrect assumptions and conclusions during an investigation.
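As one concrete example of closing a blind spot, here's a rough sketch of time-correlating netflow records with endpoint process events, using pandas' merge_asof (the column names and file names here are hypothetical):

```python
import pandas as pd

# Hypothetical schemas: both frames carry 'host' and 'timestamp' columns.
procs = pd.read_csv("process_events.csv", parse_dates=["timestamp"])
flows = pd.read_csv("netflow.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted on the join key.
procs = procs.sort_values("timestamp")
flows = flows.sort_values("timestamp")

# For each flow, find the nearest process event on the same host within
# five seconds; flows with no match may indicate a telemetry gap.
correlated = pd.merge_asof(
    flows,
    procs,
    on="timestamp",
    by="host",
    tolerance=pd.Timedelta(seconds=5),
    direction="nearest",
    suffixes=("_flow", "_proc"),
)
```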

Not All that Alerts is Odd

Is this normal for my environment?

This one is especially important for new analysts/responders. When first examining an alerting endpoint, you will see a lot of activity that looks suspicious without context. Hell, I have an entire site dedicated to cataloging innocuous things that look malicious.

"Whoah that's running PowerShell. As SYSTEM? That's gotta be evil."

"Who would run whoami this often unless it was a C2?"

"Ah ha! A giant blob of base 64-encoded PowerShell! I have you now!"

Yeah...all of these are normal and performed by legitimate software. Without good filtering, it's easy for this noise to blot out the real malicious activity you're looking for.
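If you want to know what one of those base64 blobs actually says before declaring victory, the decode is trivial: PowerShell's -EncodedCommand takes base64 over a UTF-16LE string, not UTF-8. A minimal sketch:

```python
import base64

def decode_encoded_command(blob: str) -> str:
    # -EncodedCommand blobs are base64 over UTF-16LE, not UTF-8.
    return base64.b64decode(blob).decode("utf-16-le")

print(decode_encoded_command("dwBoAG8AYQBtAGkA"))  # -> whoami
```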

This is prevalence bias, a form of confirmation bias, at work. Is the weird thing really that weird, or does it show up on every endpoint? Prevalence bias is similar in nature to time bias: both are overcome by taking a wider perspective and considering more than what's in front of your nose.

If you're unsure about whether a thing is actually strange, some first stops should be xCyclopedia, WTFBins, and LOLBAS. Just as importantly, use some data analysis skills on your telemetry to determine the global prevalence of the activity in your environment. If it's rare, that's something to consider.
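For the data analysis piece, here's a minimal sketch (hypothetical schema and file name, pandas again) of computing global prevalence of a process across your fleet:

```python
import pandas as pd

# Hypothetical schema: one row per process event, with 'hostname' and 'process_name'.
events = pd.read_csv("process_events.csv")

total_hosts = events["hostname"].nunique()

# Fraction of the fleet on which each process name has been seen.
prevalence = (
    events.groupby("process_name")["hostname"]
    .nunique()
    .div(total_hosts)
    .sort_values()
)

# Processes seen on fewer than 1% of hosts deserve a closer look.
print(prevalence[prevalence < 0.01])
```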

If it Walks Like an Evil Duck

Is this consistent with expected behavior?

Often, an alert will be for a specific flavor of malware. To know what this means in a broader sense, the analyst needs access to solid threat intelligence. Who operates a specific dropper, for example, and what do they do after initial compromise? When an alert names a specific malware family, it is often useful to compare the observed behavior against expectations.

That's not to say a mismatch means there's nothing to worry about. A bad attribution doesn't a false positive make. It could well be another malware variant, or even an evolution of a group's techniques. Nevertheless, knowing whether there's consistency between expected and observed behavior is essential. For one thing, understanding expected behavior tells the investigator where to look for likely further evidence of malicious activity.
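One lightweight way to structure that comparison is as a diff between the techniques your intel says to expect and the techniques you've actually observed. A toy sketch, with illustrative ATT&CK technique IDs standing in for a real intel profile:

```python
# Illustrative only: real expectations should come from your threat
# intelligence, not from a hardcoded set.
expected = {"T1566.001", "T1204.002", "T1059.001", "T1071.001"}
observed = {"T1059.001", "T1021.002", "T1071.001"}

print("Expected but not yet observed:", expected - observed)  # where to hunt next
print("Observed but not expected:", observed - expected)      # misattribution, or evolution?
```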

Go Armed with Knowledge

It is a short but invaluable list of questions I ask myself on every investigation. My team asks these of each other as well, making sure we do not get locked in on one theory without sufficient evidence. We strive to learn the whole story of an incident to ensure not only that we fully eradicate the threat, but that we're more prepared for the next attacker.