The How vs the Who: An Argument Against Attribution & Hack Back

— November 8, 2016

A lot of organizations focus their efforts on identifying external actors, distinguishing between different groups that may be attempting malicious activity. At some organizations, this is relevant due to the defender’s sophistication, capabilities, and relationships. However, they are the 1%-ers and have many of the same difficulties that we are about to explore.


For the 99%, there is an unhealthy fascination around actors, attribution, and the “who done it?” The 99% believe that this information is both accurate and actionable. This belief has been propagated by security vendors; Hollywood’s portrayal of hacking and defense; and the fourth estate’s fascination with spy thriller storylines like the DNC breach and its role in the US presidential election.


Sadly, this fascination has caused overinvestment (any number greater than zero) in tooling that purports to offer mystical cyber command capabilities. Even worse, this regularly enters into prioritized budgets driven by boardroom conversations like “what if China steals all of our source code?” While such questions may be reasonable for some organizations, they should not translate into a defensive strategy specific to this category of threat. Your organization has other valuable targets (the credit card on file with AWS, user information, a world-writable S3 bucket, etc.) and must focus on a comprehensive security strategy instead.


Your team’s goal is to detect weird or bad behavior, confirm or debunk the incident, identify the how but not the who, and deploy remediation while taking proper organizational steps (disclosure, legal, HR, PR, etc.). Your strategy and tooling should follow this line of thinking.


For those who are still not convinced, or who still think they want to know who done it, let’s dig in.


Let’s Assume You Nailed Attribution


We’ll hand-wave away how you achieved bulletproof attribution and validated its accuracy. Now what?


You probably can’t contact the actor directly. If you can, what are you going to do? Ask them to please stop? You’ll end up bargaining and discussing payoffs. If you pay, you’re incentivizing them to continue their behavior with other organizations or to continue attacking you anyway. Either way, what if they post your communication on public channels?


While you can and should provide the information to law enforcement, there is no guarantee that it meets their requirements for investigation or that the actors are within the relevant jurisdiction. Again, this varies depending on who your organization is and what level of access it has to law enforcement.


Hacking back is controversial and deserves a longer discussion. For now, it’s likely illegal where you live or operate, so don’t do it, especially since you likely misattributed the attack in the first place.


Attribution adds little to no value on top of Indicators of Compromise (IOCs), since the detection methods remain the same. Tracking changes in behavior doesn’t require knowing the actor’s identity. In fact, most security operations teams who do track attribution end up grouping IOCs under internal code names, creating an ever-growing rolodex of unlinked aliases. For example, if an actor shifts their behavior and the source of their attacks, how would your team identify them as the same actor? Is doing so an automated task? If not, is it worth spending scarce security expertise on it?
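To make the point concrete, here is a minimal sketch (not any particular vendor’s or team’s tooling) of an IOC-based detection check in Python. All indicator values and field names are hypothetical; the thing to notice is that no actor name or alias appears anywhere in the matching logic.

```python
# Flat set of observables pulled from threat intel feeds. The "actor" column
# from the feed is deliberately dropped because detection never consults it.
# All indicator values below are placeholders.
KNOWN_BAD_DOMAINS = {"update-check.example", "cdn-telemetry.example"}
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # placeholder MD5


def is_suspicious(event: dict) -> bool:
    """Return True if an event touches any known-bad observable."""
    return (
        event.get("dns_query") in KNOWN_BAD_DOMAINS
        or event.get("file_md5") in KNOWN_BAD_HASHES
    )


# The verdict is identical whether the indicator was attributed to
# "CODENAME 1" or "CODENAME 2"; attribution never enters the check.
print(is_suspicious({"dns_query": "update-check.example"}))   # True
print(is_suspicious({"dns_query": "intranet.corp.example"}))  # False
```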


Don’t forget, all of this assumed you were right to begin with.


Spoiler Alert: You Got the Attribution Wrong


Attribution is incredibly difficult. Governments have trouble attributing attacks, and often rely on educated guesses built on years of research and luck. Businesses do not have the resources to be in this business, arguably not even the 1%-ers. Security expert Bruce Schneier has said this about attribution: “You used to be able to tell who attackers were by the weapons they used. Governments used tanks, so if one rolled up outside your house, you’d know a government was behind it. Online everyone uses the same tools and techniques, so it’s hard to tell whether the attack was from a government source, or two guys in a basement.”


One reason the weapons look the same is that malware source code is reused and resold all the time. Malware has a business model as well; you may be surprised to hear that commercially available malware packages tend to include support contracts and code generators to customize payloads. Truly advanced authors may even try embedding signatures to suggest that their malware was authored by another party. Example signatures include variable names and spelling, local computer time zone during compilation, methods of persistence or command and control (C2), etc.
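As an illustration of how cheap these “signatures” are to fake, here is a minimal sketch of reading and rewriting one of them, the PE compile timestamp. It assumes the third-party pefile package is installed and that sample.exe is a binary you happen to be analyzing; because the field is controlled by whoever built or last touched the file, inferring an author’s working hours or time zone from it proves very little.

```python
# Minimal sketch: the PE compile timestamp is just a header field, set (and
# settable) by whoever produced the file. Assumes the third-party pefile
# package (pip install pefile) and a local file named sample.exe.
from datetime import datetime, timezone

import pefile

pe = pefile.PE("sample.exe")

# Read the "signature" some analyses use to guess the author's time zone or
# working hours.
ts = pe.FILE_HEADER.TimeDateStamp
print("Claimed build time (UTC):", datetime.fromtimestamp(ts, tz=timezone.utc))

# The same field can be overwritten before distribution, so it is weak
# evidence of who actually built the binary.
pe.FILE_HEADER.TimeDateStamp = 0  # or any deliberately misleading value
pe.write("sample_timestomped.exe")
```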


There is also the issue of shortest path for the actor. Why would an actor bother inventing their own path into your organization or environment if they can reuse existing paths? Invention is far more difficult and expensive than reuse, one reason why seemingly old and outdated software vulnerabilities continue to pose issues for organizations. The ROI for an actor using a common attack versus a novel one is far better and has the benefit of blending in with everyone else who uses the same attack method.


The consideration of ROI is one reason why botnets are so popular as a force multiplier, scanning the internet for machines exposing known vulnerabilities. It’s fairly uncommon these days for a breach to occur where someone’s fingers are on a keyboard because they want that specific target machine.


The actor’s geolocation? While security vendors and Hollywood both want you to think this is important, the reality is that geolocation is not a smoking gun. It’s trivial to route traffic through different locales: either buy a virtual/cloud server in a region or take over someone’s computer there. It’s like assuming the return address on an envelope is 100% accurate when really anyone could have written it.


If someone sends you a bomb with a return address, are you going to assume it’s accurate? For hack back fans, would you send your own bomb to that address? Are you really that sure? Are you really so sure that none of your servers or IoT devices ever took part in an attack, and that by your own logic no one would be justified in attacking your infrastructure?


Concluding with Value and ROI for the 1%


You really have to ask yourself what the value of attribution is to your organization, given the level of effort to achieve such lossy results.


If you are a 1%-er or are close to it, then you probably have a sizable and qualified analyst team. It frustrates me when these organizations focus on acquiring tools that inform them of attribution. I understand the fundamental desire to reduce headcount cost with tools, especially when security talent is so hard to find and so expensive, but these organizations should instead focus on making that headcount more effective.


If you can increase the productivity of the analyst team you’ve invested in, taking a job from hours of effort down to minutes, then the dollars spent on those qualified individuals go much further than dollars spent on a lossy black box. The key is to provide your team with contextualized and actionable data that they can glance at, understand, and quickly decide whether to continue analysis or escalate for further operational action.
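For illustration only, here is a minimal sketch in Python of the kind of enrichment meant here. All field names, data sources, and helpers are hypothetical; the point is that the context an analyst needs to triage quickly is internal (asset owner, criticality, recent changes), not an actor name.

```python
def enrich_alert(alert: dict, asset_inventory: dict, recent_changes: dict) -> dict:
    """Bundle an alert with the internal context needed to triage it quickly.

    Hypothetical inputs: asset_inventory and recent_changes are lookups keyed
    by hostname, e.g. populated from a CMDB and a change-ticket system.
    """
    host = alert["hostname"]
    return {
        **alert,
        "asset_owner": asset_inventory.get(host, {}).get("owner", "unknown"),
        "asset_criticality": asset_inventory.get(host, {}).get("criticality", "unknown"),
        "recent_change_tickets": recent_changes.get(host, []),
        # The triage question is "is this expected, and what do we do next?",
        # not "which foreign intelligence service was it?"
    }
```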


Author: Sam Bisbee


