The Role of Norms in Internet Security: Reputation and its Limits

Jesse Sowell
Tuesday, May 8, 2018, 7:00 AM

Who maintains the security and stability of the internet—and how do they do it? It’s a simple question, but a difficult one to answer. Internet security, writ large, comprises a diverse set of social and technical tools and an equally diverse set of industry norms around mitigating and remediating abusive behavior. Those tools are developed and used by what I term operational security communities—groups of individuals, largely unaffiliated with governments, that do the day-to-day work of maintaining the security and stability of the internet. What these communities actually do, and the scope and nature of the challenges that they face, is often poorly understood, even among sophisticated state actors. But one of the key mechanisms on which operational security communities rely is a surprisingly familiar one: reputation.

Let’s begin with a simple statistic: At least 90 percent of the email that transits the internet is unsolicited and unwanted—what the operational security communities refer to as “abuse.” For the most part, though, these messages do not appear in user mailboxes. Mailbox providers—internet service providers such as Comcast and Verizon and webmail providers such as Yahoo! Mail and Gmail—take the necessary actions to stop abusive messages at their borders, protecting both end users and their own infrastructure. In protecting users from these illicit messages, these providers are part of the anti-abuse community—one subset of the broader internet security industry.

Email may be the internet’s original “killer application,” but it is notoriously difficult to secure. What’s more, it’s one of the easiest ways for a bad actor to infect a computer with malware—it’s what’s called an “infection vector.” (Cryptolocker, a strain of ransomware circulated by email attachment, is a textbook instance of this.) Because of email’s prominence and its vulnerabilities, efforts to secure email illustrate the canonical challenges facing security professionals every day. The first challenge is how to install safeguards that will not get in the way of everyday use. The second (often an unintended consequence of the first) is how to avoid creating bigger problems when users discard or circumvent “safeguards” that do get in the way of everyday workflows—workarounds that further obscure the root causes of the security problem.

The anti-abuse community leverages reputation-based mechanisms to mitigate both of these problems. By reputation, I mean just that: When it becomes clear to the anti-abuse community that abusive actors are using a particular network to send abusive messages, that network garners a bad reputation. Rather than placing security burdens on the end user—like requiring users to maintain a spam filter themselves—reputation mechanisms block messages at the border, shielding end users from abusive messages (and their malicious payloads) and denying malicious actors their prize.

Reputation is not an exotic or esoteric tool, but a common, powerful enforcement mechanism that is often overlooked. It plays out tacitly, and often informally, in the behavioral norms of rural communities, amongst firms vying for market position, and in transnational politics. In each case, reputation is a form of soft power that operates at the interstices of community best practices and the black letter of the law to enforce norms that fill the gaps or avoid the transaction costs associated with conventional legal processes. A classic instance from rural communities comes from Robert Ellickson’s “Order Without Law.” Cattle ranchers in Shasta County, California, have well-defined informal norms and processes for managing common resources. When cattle stray or destroy private property or commonly accepted rights-of-way, ranchers choose not to invoke formal legal processes to mediate conflicts, but instead turn to informal norms and reputation for compliance with those norms.

At its core, reputation in messaging works much as it does in any other community. When an actor repeatedly violates an accepted norm, members of the community proximate to the violation notice. And while the internet is vast, at the level of email and routing infrastructure operations, the operational security community is surprisingly small and tight-knit—much like the rural communities that rely on informal norms. The gatekeepers to messaging, in particular, all know each other. They all share information about where they see abusive messages coming from. And they all have common incentives to block abusive messages: protecting end users and reducing the operational costs of malware infections.

The cornerstone of reputation is trusted professional relationships. To operate as a transnational enforcement mechanism, though, reputation must also scale. The anti-abuse community has substantive access to information about networks originating abuse: tools facilitating user reports of abusive messages, malware detection tools, and honeypots. In fora like the Messaging, Malware, and Mobile Anti-Abuse Working Group and the Anti-Phishing Working Group, these professionals have developed best practices and supporting tools not only for collecting information about where abuse is coming from, but also for sharing that information quickly and effectively in support of ongoing mitigation efforts.

There are also what I will call “reputation aggregators” (often referred to by the community as blocking lists), which offer neutral, third-party scaling mechanisms. Mailbox operators provide these actors with real-time data feeds about which networks are sending abusive messages. When these reputation aggregators see the same kind of abuse, originating from the same network, reported by mailbox operators all over the world—and also see similar abuse from their own honeypots—they have sufficient evidence to issue a public reputation advisory. In effect, the reputation aggregator publicly announces, “We have seen sufficient evidence, from diverse sources, that network X is consistently sending abusive messages. We recommend you block messages from network X until we remove this advisory.” In turn, mailbox operators around the world use these recommendations to block messages from abusive networks, protecting end users while staying out of the way of everyday workflows.
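To make this concrete, many blocking lists are published as DNS zones that mail servers can query in real time before accepting a message. The Python sketch below shows a conventional DNSBL-style lookup of the kind a mailbox operator might perform; the blocklist zone name is a placeholder of my own, not a reference to any particular aggregator, and real deployments layer considerably more policy on top of a simple listed/not-listed answer.

```python
import socket

def reputation_lookup(sender_ip: str, blocklist_zone: str = "bl.example.org") -> bool:
    """Check whether a sending IPv4 address appears on a DNS-based blocking list.

    DNSBL convention: reverse the address's octets, prepend them to the
    blocklist zone, and query for an A record. An answer means "listed";
    a name-not-found error means "not listed". The zone here is illustrative.
    """
    reversed_octets = ".".join(reversed(sender_ip.split(".")))
    query_name = f"{reversed_octets}.{blocklist_zone}"
    try:
        socket.gethostbyname(query_name)  # any A record means an advisory is in effect
        return True
    except socket.gaierror:
        return False  # no record: no current advisory for this address

# A mail server consults the aggregator at the border, before the message
# ever reaches an end user's mailbox.
if reputation_lookup("192.0.2.10"):
    print("Reject or quarantine: the sending network has a blocking advisory.")
else:
    print("Accept: no advisory for this network.")
```

Once the aggregator removes its advisory, the same query simply stops returning an answer and mail from that network flows again; no change is needed on the mailbox operator's side.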

Reputation is not only an enforcement mechanism—it is also a signaling mechanism. In some cases, an actor new to the messaging industry, who is not yet part of the anti-abuse community, simply may not know the rules. A reputation “ding” is one way to signal that there are rules, and that there are also consequences to violating those rules. The anti-abuse community publishes its best practices and is quite willing to help newcomers find their way; once the newcomer eliminates the source of abusive messaging, reputation aggregators will recognize the change in their data feeds and remove the blocking advisory. In the successful case, these new entrants embrace anti-abuse norms as good business practices. As they develop their own monitoring tools, the newcomers soon become good remediators: networks that quickly recognize when they have an abusive actor on their network, often before reputation aggregators, and quickly resolve the problem. Reputation aggregators recognize this credible commitment and often opt to send these actors private signals rather than publishing public advisories. Such a private signal reduces the impact on the good remediator, improves industry relations, and ultimately improves the good remediator’s bottom line.

Not all actors that know the rules are credibly committed, though. Rather than embrace anti-abuse norms, some actors are satisficers: They see best practices as exogenously imposed costs and view the cost of remediating as greater than the cost of losing business, even if that line of business creates costs for others in the messaging industry and risks for end users. Reputation aggregators discipline satisficers with graduated sanctions. As the satisficer persists, the reputation aggregator recommends blocking more and more of the satisficer’s network infrastructure, imposing costs not only on abusive actors but also on that satisficer’s legitimate customers.
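A rough sketch of what such graduated sanctions might look like in code appears below. The escalation thresholds and prefix lengths are assumptions of mine for illustration, not any aggregator's published policy; the point is simply that persistent abuse widens the recommended listing from a single address to ever larger slices of the satisficer's address space.

```python
import ipaddress

# Hypothetical escalation ladder: as abuse incidents tied to the same provider
# accumulate, the recommended listing widens. Thresholds are illustrative only.
ESCALATION_LADDER = [
    (1, 32),   # first incident: list only the offending address (/32)
    (5, 24),   # repeated incidents: list the surrounding /24
    (20, 20),  # persistent abuse: list the /20, sweeping in legitimate customers
]

def recommended_listing(sender_ip: str, incident_count: int) -> ipaddress.IPv4Network:
    """Return the network an aggregator might advise blocking, given how many
    abuse incidents have been attributed to the sender's provider."""
    prefix_length = 32
    for threshold, candidate in ESCALATION_LADDER:
        if incident_count >= threshold:
            prefix_length = candidate
    return ipaddress.ip_network(f"{sender_ip}/{prefix_length}", strict=False)

print(recommended_listing("203.0.113.77", 1))   # 203.0.113.77/32
print(recommended_listing("203.0.113.77", 7))   # 203.0.113.0/24
print(recommended_listing("203.0.113.77", 25))  # 203.0.112.0/20
```

Widening the listed prefix is what turns the satisficer's externality into a problem it can no longer ignore: the advisory now interrupts service for legitimate customers, and their complaints are what finally prompt a response.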

Satisficers respond only when these customers complain about service interruptions. They act to resolve the immediate problem, repairing their reputation for the moment, but do little more. It is only a matter of time before they take on another abusive client, and the cycle starts again.

Ultimately, this is a losing strategy. Legitimate customers have a low tolerance for service interruptions and take their business elsewhere. If the satisficer does not recognize this trend in time, they can be left with nothing but abusive clients. In the worst case, the satisficer gives up on legitimate messaging and becomes a haven for abusive actors, shifting from merely satisficing within the rules to concerted efforts to circumvent them in order to maintain a revenue stream. They promulgate messages whose value is derived exclusively from malicious payloads, such as those supporting phishing, ransomware, and botnet campaigns.

In the case of the well-intentioned newcomer, we see the success and benefits of reputation. In the case of the satisficer that ultimately becomes a haven for abuse, we see its limits. Reputation can guide a marketing firm that doesn’t yet know the rules, and it can deny abusive actors access to legitimate markets for messaging infrastructure. But it does not remediate the root cause of that abusive behavior: the dedicated criminals who use these infrastructures for illicit economic and political gain.

When reputation mechanisms reach the limits of their efficacy for disciplining such recidivists, other mechanisms come into play. Reputation aggregators, often in concert with others in the anti-abuse community, reach out to law enforcement to supplement their efforts, offering intelligence on criminal activity that state actors can leverage in evidence-collection efforts supporting the prosecutions and takedowns necessary to ultimately remediate abuse. This cooperation does not always come to fruition, of course. But there are a number of well-known instances—Gameover Zeus and Cryptolocker, DNSChanger, and Avalanche—in which members of the anti-abuse community, the broader cybersecurity community, and law enforcement have worked together towards a common goal.

Reputation mechanisms have a variety of benefits. First and most obviously, they are a powerful mechanism for monitoring and enforcing norms in a seemingly chaotic internet. But there are other benefits as well. Reputation is not only an enforcement mechanism but also a nuanced signaling mechanism: it is just as much about opening a dialogue about best practices as it is about enforcing norms. Reputation mechanisms also demonstrate that industry actors can collaborate to create a collateral public good—and, as implied above, these actors also know when to reach out to state authorities to supplement their private enforcement capabilities. Such lessons suggest that rather than continuously looking to high politics for solutions that promote internet stability and security, perhaps policymakers should shift their gaze to the parts of the internet that society takes for granted, and take lessons from those institutions, processes, and norm entrepreneurs under the hood that continuously work to ensure we all have the safe and secure online experiences we expect.


Jesse Sowell is a Postdoctoral Cybersecurity Fellow at Stanford’s Center for International Security and Cooperation. His research focuses on the non-state institutions that ensure internet security and stability. Analytically, Sowell's work combines internet operations, industrial political economy, and operations strategy to understand the supply and demand of institutional and organizational capabilities and capacities that make these institutions effective. Ongoing work explores how these transnational institutions engage with conventional state actors. He earned a Ph.D. from MIT’s Engineering Systems Division in 2015.
