A New Kill Chain Approach to Disrupting Online Threats
If the internet is a battlefield between threat actors and the investigators who defend against them, that field has never been so crowded. The threats range from hacking to scams, election interference to harassment. The people behind them include intelligence services, troll farms, hate groups, and commercial cyber-mercenary firms. The defenders include investigators at tech companies, universities, think tanks, government agencies, and media outlets.
This is a seismic change from the 2016 U.S. presidential election cycle, when Russian attackers from GRU military intelligence and the Internet Research Agency exploited an environment that was not only largely undefended but also largely undefined. As we look ahead to 2024 and the many elections it brings, one great improvement is that there are now defenders who specialize in understanding and thwarting many different types of threats.
Increasingly, however, threat actors do not work in one threat area alone. An online network from Azerbaijan that Meta took down in 2022 combined hacking with influence operations. A Bolivian network combined influence operations and mass reporting, trying to get the social media accounts of news organizations and opposition members shut down for nonexistent violations. Threat actors routinely work across many different platforms, from the European anti-vaccine group that coordinated on Telegram to harass people across social media and in real life, to the Iranian cyber espionage operation that created malicious applications disguised as a VPN app, a salary calculator, an audio book reader, and a chat app.
As long as the defenders remain siloed, without a common framework to understand and discuss threats, there is a risk that blended and cross-platform operations like these will be able to find a weak point and exploit it.
A Kill Chain for Online Operations
To help break down those silos between investigators in different fields, companies, and institutions, we have developed a framework to analyze, map, and disrupt many different sorts of online threats: a kill chain for online operations. This is by no means the first time the “kill chain” concept—which identifies the sequence of activities attackers go through in their operations and looks for ways to disrupt them—has been applied to the study of internet threats. In fact, we have taken inspiration from pioneering works such as the Lockheed Martin Intrusion Kill Chain and the Influence Operations Kill Chain. But this is the first kill chain designed to cover all types of online operations in which the target is a human being rather than a computer, from espionage and social engineering through influence operations to coordinated harassment or gang recruitment.
We conceived our framework by examining case studies conducted by our Meta threat-investigator colleagues. While the threat actors’ goals were very different—from selling merchandise to stealing someone’s login details—we repeatedly observed them using the same tricks. Whether using AI-generated profile pictures to disguise their accounts, or trying to steer people toward websites and smaller platforms where the defenses might be less robust, spammers, influence operators, hackers, and fraudsters used common techniques.
We designed our kill chain to identify and sequence those common behaviors—mapping the steps that bad actors take along the path toward their goal. Our goal is to identify the points along that path where the defenders can trip up the attackers. Disrupting any one of those steps can interrupt the operation. Ultimately, however, the defender’s goal is to “complete the kill chain” and find ways to disrupt them all.
At a series of round tables hosted by the Carnegie Endowment’s Partnership for Countering Influence Operations, we sought feedback on this framework from our peers at other platforms and in the research community. During this process, we realized that our collective field needs not only a map but also a dictionary: a shared body of terminology that describes the behaviors we see.
Even within individual disciplines, the vocabulary of threat analysis is ambiguous. One threat can have many names: Different researchers refer to the same threat as “misinformation,” “disinformation,” “malinformation,” or “influence operations.” Conversely, one name can be applied to multiple threats: the term “exploitation” has been applied to both executing unauthorized code on a target’s system and amplifying an influence campaign with bots, trolls, and “useful idiots.” In this paper, we have tried, wherever possible, to avoid terminology that has, or could have, a contested definition.
Disrupt, Compare, Share
As we gathered feedback, some experts asked us, “Are you trying to get better at disrupting these operations, or sharing information about them?” In fact, we have designed the kill chain for three main purposes: to disrupt, compare, and share.
The first goal is to enable disruption of online operations by identifying weaknesses across their life cycle, but especially in the earliest phases.
Take a hypothetical example: Threat researchers discover an online operation that registers fake social media accounts by using email addresses from a niche domain. It disguises them by copying pictures of a particular influencer, then uses the disguised accounts to spam-post links to websites on social media and elsewhere online. If the researchers prioritize ways to detect more accounts with the same combination of email domain and influencer photos, they can potentially identify further waves of fake accounts before they can even start spamming.
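To illustrate, here is a minimal sketch in Python of that combined-signal detection. Everything in it—the field names, the email domain, the photo hashes—is invented for illustration; a production system would draw on internal account metadata and perceptual image hashing rather than exact matches.

```python
# A sketch of the combined-signal detection described above. The fields,
# domain, and hashes are hypothetical; a real system would use internal
# account metadata and perceptual image hashing (e.g., pHash) rather
# than exact string matches.
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    email_domain: str
    photo_hash: str  # perceptual hash of the profile picture

SUSPECT_EMAIL_DOMAIN = "mail.niche-example.net"           # hypothetical indicator
KNOWN_INFLUENCER_PHOTO_HASHES = {"a1b2c3d4", "e5f6a7b8"}  # hypothetical indicator

def flag_candidates(accounts: list[Account]) -> list[Account]:
    """Flag accounts matching BOTH signals; requiring the combination keeps
    the false-positive rate lower than either signal would alone."""
    return [
        a for a in accounts
        if a.email_domain == SUSPECT_EMAIL_DOMAIN
        and a.photo_hash in KNOWN_INFLUENCER_PHOTO_HASHES
    ]
```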
The second goal is to enable investigators to compare different operations—whether these are two operations of the same type, two operations of different types, or even one operation at two moments in time.
Let’s extend the hypothetical example: The researchers could look for the same combination of the niche email domain and the photos of the same influencer turning up in other cases. If they find that the same combination also featured in a newly discovered fraud case and an older spam network, they can investigate whether these three apparently separate cases were, in fact, connected.
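Continuing the illustration, here is a minimal sketch of that comparison step, assuming each case has already been reduced to a set of labeled indicators. All case names and indicators below are hypothetical.

```python
# A sketch of cross-case comparison: each case is reduced to a set of
# observed indicators, and pairwise overlap surfaces candidates for a
# deeper look. Case names and indicators are hypothetical.
from itertools import combinations

cases = {
    "new_fraud_case":  {"email:mail.niche-example.net", "photo:a1b2c3d4", "wallet:0xABC"},
    "older_spam_net":  {"email:mail.niche-example.net", "photo:a1b2c3d4"},
    "unrelated_troll": {"domain:spoof-news.example", "photo:ffff0000"},
}

def jaccard(a: set[str], b: set[str]) -> float:
    """Share of indicators two cases have in common (0 = none, 1 = identical)."""
    return len(a & b) / len(a | b)

for (name_a, ind_a), (name_b, ind_b) in combinations(cases.items(), 2):
    if shared := ind_a & ind_b:
        print(f"{name_a} <-> {name_b}: jaccard={jaccard(ind_a, ind_b):.2f}, shared={shared}")
```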
The third goal is to enable sharing across the community in a commonly understood framework, reducing information noise and helping investigators quickly understand a case. Subject to appropriate privacy protections, threat research sharing is intended to “complete the kill chain” by enabling different teams to take action against malicious activity at many more points along the threat actor’s path.
To conclude our hypothetical example: The original researchers could share these specific insights with peers across the industry and the open-source research community who can then find and disrupt this malicious activity on their respective platforms and on the open internet.
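A phase-tagged summary like the sketch below shows the kind of structured record such sharing implies. The schema is invented for illustration—not a published standard—though STIX plays an analogous role for cyber threat intelligence.

```python
# A sketch of a phase-tagged, shareable summary. The schema is invented
# for illustration, not a published standard.
import json

report = {
    "operation": "hypothetical-spam-operation",
    "phases": {
        "acquiring_assets": ["accounts registered via niche email domain"],
        "disguising_assets": ["profile photos copied from a single influencer"],
        "indiscriminate_engagement": ["spam-posted links across platforms"],
    },
    "sharing_notes": "indicators reviewed for privacy before release",
}

print(json.dumps(report, indent=2))  # ready to hand to peer teams
```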
Principles of the Kill Chain
To make the kill chain useful to the wide range of defenders we envisage, we built it according to six principles: It is tactical, observation based, platform agnostic, optimized for human-on-human operations, applicable to one or many platforms, and modular.
To make the kill chain tactical and observation based, it focuses on behaviors that can be directly observed or inferred with high probability. For example, if an operation tweets that it is inviting direct messages on Signal, it is legitimate to infer that the operation likely has a Signal account. The kill chain is not intended to map operations’ strategic intentions or authors: It maps the “how” of their behavior, not the “who” or the “why.”
It is platform agnostic, meaning that it can be applied to operations on all kinds of platforms (for example, social media and email services, among many others) and is designed to analyze operations that spread across one or many platforms. This can also include offline activity, such as registering companies or renting office space for trolls.
The kill chain is optimized for operations in which the source and target are human—whether the operation is an espionage team using a fake persona to befriend an engineer in a strategically sensitive industry or an influence operation recruiting an unwitting journalist. The kill chain focuses on these types of operations because we see the highest degree of commonality between different types of operations whose shared goal is to engage or entrap someone. The kill chain can be applied to machine-on-machine attacks, but it is not designed primarily with them in mind.
Finally, the kill chain is modular. Its phases follow a sequence, but not every operation will make use of every phase.
Ten Links in the Chain
The kill chain itself consists of 10 phases or links. This is longer than comparable frameworks, because most kill chains begin with the reconnaissance phase—traditionally the earliest phase that cyber defenders would be able to detect—while ours includes asset acquisition and disguise, both of which likely come before reconnaissance on social media.
Each phase describes a broad type of activity. To use the industry’s traditional framing, they are top-level tactics. Each tactic breaks down into a number of separate techniques, and each technique breaks down further into detailed procedures—explained in more depth in our paper. For example, we divide the tactic of acquiring assets into separate techniques, such as acquiring email addresses or phone numbers. The technique of acquiring email addresses is then broken down into procedures, such as acquiring encrypted or throwaway email addresses.
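To make the hierarchy concrete, here is a minimal sketch encoding the 10 phases in order and the acquiring-assets example above. Labels beyond those given in this article are omitted rather than guessed.

```python
# The ten phases in their published order, plus the tactic -> technique ->
# procedure hierarchy populated only with the acquiring-assets example
# from the text (other labels are omitted rather than guessed).
KILL_CHAIN_PHASES = (
    "acquiring_assets", "disguising_assets", "gathering_information",
    "coordinating_and_planning", "testing_defenses", "evading_detection",
    "indiscriminate_engagement", "targeted_engagement",
    "compromising_assets", "enabling_longevity",
)

taxonomy = {
    "acquiring_assets": {                          # tactic (top-level phase)
        "acquire_email_addresses": [               # technique
            "acquire_encrypted_email_addresses",   # procedure
            "acquire_throwaway_email_addresses",   # procedure
        ],
        "acquire_phone_numbers": [],               # technique (procedures not shown)
    },
}

def procedures_for(tactic: str, technique: str) -> list[str]:
    """Look up the detailed procedures filed under a tactic/technique pair."""
    return taxonomy.get(tactic, {}).get(technique, [])
```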
Acquiring Assets
In our view, online operations begin when threat actors start setting up the assets they need. This can include acquiring IP and email addresses, phone numbers, and social media accounts, but also acquiring bank accounts, crypto wallets, computer hardware, and office space, as well as registering companies.
Disguising Assets
Once an operation has acquired its assets, it will likely want to make those assets look convincing. At the most basic level, this is likely to involve adding a profile picture (stolen, non-human, or AI-generated) and a basic biography. In more sophisticated cases, it can include fake personas that are backstopped across many social media platforms (for example, a fake news editor with a Facebook profile, Twitter account, LinkedIn presence, and website) and even backstopped with fake family members, as Meta reported of one influence operation linked to Fatah—a political party in Palestine.
Gathering Information
This phase refers to any attempt—manual or automated—that an operation makes to gather information. It can range from scraping and accessing databases of stolen passwords to viewing potential victims’ social media profiles, as the Department of Justice reported in an espionage case in 2020.
Coordinating and Planning
This phase refers to the many ways in which threat actors coordinate and plan. This can range from public posts to recruitment channels on Telegram to automated and scripted posting.
Testing Defenses
An operation that experiments with different behaviors in order to see what it can get away with is considered to be testing the defenses. For example, hacking groups have reportedly uploaded their own malware to antivirus scanning services such as VirusTotal to see whether it would be detected.
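Defenders can query the same service from the other direction. The sketch below—with a placeholder API key and hash—checks whether a sample is already known to VirusTotal via its public v3 REST API; it is an illustration, not part of the kill chain framework itself.

```python
# A defender-side mirror of this tactic: checking whether a sample is
# already known to VirusTotal via its public v3 REST API. The API key
# and hash are placeholders; the requests library must be installed,
# and VirusTotal's terms of service govern real use.
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder
SAMPLE_SHA256 = "0" * 64     # placeholder hash

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SAMPLE_SHA256}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
if resp.status_code == 404:
    print("Sample not yet known to VirusTotal.")
else:
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{stats.get('malicious', 0)} engines flag this sample as malicious.")
```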
Evading Detection
Any repetitive action an operation undertakes to stay hidden can be classed as evading detection. This can include camouflaging keywords with typos—such as “vaxxine” instead of “vaccine”—or deploying technical fixes, such as the Russian operation known as Doppelganger, which geo-limited the websites it was running so that they could be viewed from only one country. Unlike asset disguise, which is mostly static, evading detection is an ongoing process. To use an analogy: evading detection is the equivalent of flying an airplane under the radar, while asset disguise is the equivalent of painting it in camouflage colors.
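A minimal sketch of the defender’s countermove to keyword camouflage: normalize common character substitutions before matching posts against a watchlist. The substitution map here is deliberately tiny; real systems handle homoglyphs, spacing tricks, and fuzzy matching.

```python
# Normalize common character substitutions before watchlist matching.
# The substitution map is illustrative; production systems use far
# richer normalization (homoglyphs, spacing tricks, fuzzy matching).
import re

SUBSTITUTIONS = {"xx": "cc", "0": "o", "1": "i", "3": "e", "@": "a", "$": "s"}
WATCHLIST = {"vaccine"}

def normalize(text: str) -> str:
    text = text.lower()
    for variant, standard in SUBSTITUTIONS.items():
        text = text.replace(variant, standard)
    return text

def matches_watchlist(post: str) -> set[str]:
    """Return watchlist terms found in the post after normalization."""
    words = set(re.findall(r"[a-z]+", normalize(post)))
    return WATCHLIST & words

assert matches_watchlist("Get the vaxxine facts here") == {"vaccine"}
```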
Indiscriminate Engagement
Operations that make no effort to land in front of a particular audience conduct indiscriminate engagement. The Chinese network known as Spamouflage, for example, posted huge numbers of low-quality memes and videos on Facebook, YouTube, Twitter, and smaller websites but made no apparent effort to reach an interested audience.
Targeted Engagement
By contrast, operations that take steps to be seen only by particular audiences conduct targeted engagement. This includes, of course, advertising but also outreach by email and direct message, posting into particular social media groups, using target-specific hashtags, and the ruse deployed by one North Korean espionage group that posed as security researchers to lure other researchers into sharing vulnerabilities and exploit code.
Compromising Assets
This phase consists of anything an operation does to take over assets—whether those are accounts, information, or money—from their original owners. This includes typical cyber exploits such as data exfiltration and account takeover, but it can also include compromising third-party apps, accessing social media accounts by taking over the underlying email account, and even convincing real people to grant admin access to accounts they control.
Enabling Longevity
Finally, an operation that takes steps to preempt or evade enforcement is engaged in attempts to enable longevity. For example, a Russian operation that ran spoofed domains posing as European news outlets tried to register new domains when the original ones were blocked.
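A minimal sketch of how defenders might act on that pattern: screening newly observed domains for close lookalikes of already-blocked spoofs. The domains are hypothetical, and difflib is a crude stand-in for purpose-built domain-squatting tooling.

```python
# Screen newly observed domains for close lookalikes of already-blocked
# spoof domains. Domain names are hypothetical; difflib is a crude
# stand-in for specialized domain-squatting detection tooling.
from difflib import SequenceMatcher

BLOCKED_SPOOFS = {"the-daily-gazette.example", "morning-herald.example"}

def lookalikes(new_domains: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Pair each new domain with any blocked spoof it closely resembles."""
    return [
        (new, old)
        for new in new_domains
        for old in BLOCKED_SPOOFS
        if SequenceMatcher(None, new, old).ratio() >= threshold
    ]

print(lookalikes(["the-daily-gazette24.example", "unrelated-site.example"]))
```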
Looking Ahead
The defender community in 2023 is far larger and more experienced than it was in 2016, but we are all working in a highly adversarial space where the most predictable feature is that threat actors will try to be unpredictable. We should expect operations that combine old threats in new ways, as well as threats that involve completely new behaviors.
It is our hope that our kill chain will enable defenders in different parts of the community to share their analyses as old threats evolve and new ones emerge, giving us all the best chance to defend against them.
As with any framework, ours is an attempt to split complex operations into smaller elements as a way to impose a systematic view on malicious activities that, by design, seek to evade and evolve. As such, there will always be edge cases and a need for flexibility. In particular, we expect ongoing discussion around the question of when an operation evolves so much that it becomes a new entity—such as the Russian Internet Research Agency hiring people in Ghana to post about issues of race in the U.S. ahead of the 2020 presidential election.
This kill chain remains a work in progress, and we look forward to new discoveries in the field of online operations that will allow us to evolve and refine it.
Disclosure: Facebook provides support for Lawfare’s Digital Social Contract paper series, for which Alan Rozenshtein is the editor. This article is not part of that series, and Facebook does not have any editorial role in Lawfare.