Cybersecurity & Tech

Content Moderation Through the Apple App Store

Paul Rosenzweig
Monday, April 10, 2023, 8:31 AM

Based on its preexisting norms, how should Apple respond to calls for banning Twitter and TikTok from its App Store?

An Aerial View of the Apple Headquarters. (Arne Müseler, https://commons.wikimedia.org/wiki/File:Apple_park_cupertino_2019.jpg; Creative Commons Attribution-Share Alike 3.0 Germany, https://creativecommons.org/licenses/by-sa/3.0/de/deed.en)


Earlier this year, Sen. Michael Bennet (D-Colo.) proposed that Google and Apple should ban TikTok from their app stores—an effort that would have significant impacts on the availability of TikTok in the West. Last year, after Elon Musk purchased Twitter, he got into a spat with Apple. Actually, he got into two spats—one over pricing and one over Twitter’s changing approach to content moderation. These, claimed Musk, led Apple to suggest the possibility that Twitter might be banished from Apple’s App Store—though Apple did not confirm this.

In this article, I want to briefly explore the problem of nontransparent decision-making through the prism of Apple’s publicly expressed policy regarding the content of apps in its App Store. Based on its preexisting norms, how should Apple respond to Bennet or to Musk?

Both situations are readily characterized as the mediation of content by an actor within the online information ecosystem other than one of the major social media apps (like Twitter or Facebook), whose conduct limits or restricts the availability of content. As Jenna Ruddock and Justin Sherman previously detailed as part of an ongoing project on content moderation, limits on content can occur anywhere in the online information ecosystem—from internet service providers to apps to payment services to security services.

Much of the content moderation discussion, however, continues to focus on the application layer as a source of control—failing to recognize the potential value in alternative methodologies of moderation. The difference, of course, between the two approaches is that content moderation at other points in the ecosystem (and here we focus on the app stores) operates on a wholesale scale rather than a retail one. Apps like Facebook can target specific posts or the content from a specific source. When an app store bans an app, however, that ban has much more widespread effects, sweeping within its ambit both useful content and content that is objectionable.

Viewed from this perspective, it is clear that Apple’s App Store and Google’s Play serve a significant gating function within the information ecosystem. If Apple, which restricts sideloading significantly, does not host an app in its store, the app is not readily available to any new consumers. Consumers who already have the app lose access to updates and improvements. In addition, Apple can make certain requirements and capabilities a condition of inclusion within the App Store system. And if an app isn’t available or usable because it fails to satisfy whatever criteria an app store owner adopts, then the app (and the content on the app) will certainly never go viral. 

Thus, as major gatekeepers to the information ecosystem, the app stores are ripe for serving as a point of intervention—if one can figure out a way of doing so consistently and without being either under- or over-inclusive. Whether the desire is to regulate the dissemination of child sexual abuse material or Chinese propaganda, app store requirements can have significant impact. In the context of child safety, for example, as Laura Draper put it, “[o]n-device app stores can requir[e] certain safety elements to be addressed before an app is made available through that store. This could induce app developers to make safety driven design decisions.” 

Manifestly, the situation becomes more challenging when the reason for banishment involves limits on content that is closer to core free expression—whether it is Musk’s Twitter or TikTok. The challenge is exacerbated because, outside of the application layer, much of what providers do is not transparent and often occurs on an ad hoc, standardless basis.

The answer, though not without some doubt, is that neither TikTok nor Twitter itself is suitable for deplatforming from the App Store under current Apple guidelines. The reason, however, that doubt remains is that Apple has given itself a great deal of wiggle room within which to manage apps in its store.

Consider, to begin with, how Apple characterizes its review of the suitability of an app for its App Store. Up front, in its introduction, Apple reserves its right to limit an app as follows: 

We strongly support all points of view being represented on the App Store, as long as the apps are respectful to users with differing opinions and the quality of the app experience is great. We will reject apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, “I’ll know it when I see it”. And we think that you will also know it when you cross it.

Talk about self-proclaimed unlimited and standardless review! When Justice Potter Stewart offered that bon mot to describe obscenity, it was considered an oddity and condemned by many as reflecting a review based on idiosyncratic opinion. Here, Apple is trumpeting that idiosyncrasy as a virtue.

Despite that self-proclaimed discretion, it remains the case that neither the possible deplatforming of Twitter nor the potential ban of TikTok fits comfortably within the existing structure of Apple’s App Store guidelines.

Twitter 

Look, for example, at Twitter and the apparent decision of Musk to change controls over content, thereby allowing increasing amounts of challenging (and in many ways objectionable) content onto the app. How do Apple’s guidelines map to these changes?

Apple begins by saying (in 1.1) that apps should not include content that is offensive, insensitive, or upsetting. But it then goes on to identify a separate set of requirements that would apply to “user-generated content” (see 1.2), in which it suggests how such apps ought to be managed:

Apps with user-generated content present particular challenges, ranging from intellectual property infringement to anonymous bullying. To prevent abuse, apps with user-generated content or social networking services must include:

  • A method for filtering objectionable material from being posted to the app
  • A mechanism to report offensive content and timely responses to concerns
  • The ability to block abusive users from the service
  • Published contact information so users can easily reach you

Apps with user-generated content or services that end up being used primarily for pornographic content, Chatroulette-style experiences, objectification of real people (e.g. “hot-or-not” voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice. If your app includes user-generated content from a web-based service, it may display incidental mature “NSFW” content, provided that the content is hidden by default and only displayed when the user turns it on via your website.

The issue, of course, is that even in its current state, Twitter would seem to meet these criteria—it has a filtering system and a reporting system. It simply has altered the definition of what it will filter and which reports it will act on. To be sure, it appears to have relaxed its restrictions on objectionable content of the sort defined by Apple in 1.1.1: “Defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups, particularly if the app is likely to humiliate, intimidate, or harm a targeted individual or group. Professional political satirists and humorists are generally exempt from this requirement.” But it has left the blocking function itself almost completely unchanged—allowing users to continue to curate their own experience.

TikTok 

The case for limiting TikTok (under Apple’s guidelines) is equally unclear. The gravamen of the complaint against TikTok is twofold. 

First, it collects data and distributes information in ways that may harm and/or addict children. To the extent this is provable, the Apple guidelines do not seem to speak to the issue at all. The Kids Category is a voluntary self-certification. Nothing in the guidelines speaks to the harm of driving addictive behavior, though the guidelines do allow for the removal of child sexual abuse material.

Second (and this is Bennet’s focus), it is thought that TikTok may choose to share the data it collects with the Chinese government, which may use it for intelligence purposes, disinformation, or other, as yet undefined, activities. TikTok denies that it has done this or that it will do so in the future—thus creating a factual dispute that would require resolution.

Assuming, however, that the factual dispute was resolvable (or that Apple chose to act based on supposition and a risk assessment rather than on a factual determination), it is still unclear whether the current guidelines would be sufficiently determinative.

Presumably, the allegation would be that TikTok had failed in its obligation to “[c]onfirm that any third party with whom an app shares user data (in compliance with these Guidelines)—such as analytics tools, advertising networks and third-party SDKs, as well as any parent, subsidiary or other related entities that will have access to user data—will provide the same or equal protection of user data as stated in the app’s privacy policy and required by these Guidelines.”

TikTok might also be said to neglect its obligation as part of Apple’s App Store to minimize the data it collects: “Apps should only request access to data relevant to the core functionality of the app and should only collect and use data that is required to accomplish the relevant task. Where possible, use the out-of-process picker or a share sheet rather than requesting full access to protected resources like Photos or Contacts.”
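For readers less familiar with the engineering terms in that guideline, the “out-of-process picker” refers to a system-provided component such as Apple’s PHPickerViewController, which runs outside the app’s own process and returns only the items a user explicitly selects, so the app never needs blanket access to the photo library. The Swift sketch below is purely illustrative (the class name and the profile-photo scenario are my own assumptions, not anything drawn from TikTok’s code or Apple’s guidelines), but it shows the data-minimizing pattern the guideline describes:

    import PhotosUI
    import UIKit

    // Illustrative only: a hypothetical screen that lets the user supply one
    // image through Apple's out-of-process photo picker. Because the picker
    // runs in a separate system process, the app receives only the photo the
    // user actually selects and never asks for blanket photo-library access.
    final class ProfilePhotoViewController: UIViewController, PHPickerViewControllerDelegate {

        // Present the system picker, configured for a single image.
        func presentPhotoPicker() {
            var config = PHPickerConfiguration()
            config.filter = .images       // show images only
            config.selectionLimit = 1     // one item, not the whole library
            let picker = PHPickerViewController(configuration: config)
            picker.delegate = self
            present(picker, animated: true)
        }

        // Delegate callback: load just the selected image and nothing else.
        func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
            picker.dismiss(animated: true)
            guard let provider = results.first?.itemProvider,
                  provider.canLoadObject(ofClass: UIImage.self) else { return }
            provider.loadObject(ofClass: UIImage.self) { image, _ in
                // Use the single image the user chose (e.g., as a profile photo).
                _ = image as? UIImage
            }
        }
    }

The design point is the one Apple’s guideline is driving at: built this way, an app ends up holding a single user-selected item rather than an entitlement to read the entire library.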

But there are challenges with interpreting these provisions to restrict TikTok’s ability (vel non) to, for example, share data with the Chinese government. To begin with, when Apple identifies third parties, its language focuses exclusively on other commercial actors—perhaps excluding, by omission, any concern about sharing with a government.

More to the point, the guidelines go on to say expressly that, “Unless otherwise permitted by law, you may not use, transmit, or share someone’s personal data without first obtaining their permission.” Though I am no expert on Chinese law, it seems clear that any sharing that TikTok might do with the Chinese government would be permitted (if not mandated) by Chinese law. Further, there is currently no countervailing prohibition in American law. 

And so, as with Twitter, there is some significant doubt whether TikTok is, in any way, violating Apple’s existing guidelines for app developers.

Tentative Implications 

What should we learn from this analysis?

First, deplatforming from an app store is a blunt tool for regulating content compared to the retail decision-making used by social media platforms—particularly for large-scale, multi-user apps like Twitter or TikTok. For this reason, some observers see app store removal as a particularly ill-suited tactic for resolving disputes that sound more in political and national security concerns than in objections to particular content (though even at this level, it is a less blunt tool than a complete government ban). On the other hand, as with the analogous decision by Cloudflare to withdraw services from Kiwi Farms, it may be that wholesale action is the better and more readily justified course.

Second, the standards for these broader interventions are not well understood. We may speculate, for example, that Apple applies an implicit requirement in judging the suitability of larger apps like Twitter and TikTok—namely, that they will not be removed unless a “significant portion” of user-generated content is thought to be so objectionable that app store removal is necessary. Indeed, such a standard would explain much of the difference in treatment between deplatforming a small-scale cesspool of racist and violent chatter, like Kiwi Farms, and the seeming reluctance to take the more notable step of acting against Twitter or TikTok (even in the face of congressional pressure).

The problem, though, is twofold: The “significant portion” limitation that I suspect is in play is both implicit and subjective. Thus, it falls back on Apple’s claimed unlimited discretion to act against content that is inappropriate in the eye of the beholder.

But that is not a formula for decision-making under a transparent system of rules and results. Given the important role app stores play as gatekeepers, if such a limitation is implicit in their decision-making, it ought—as a normative matter—to be explored and made explicit. My supposition is just that—mere speculation about what, implicitly, may be occurring. In the end, the lack of certainty about how Apple operates with respect to its App Store mirrors the uncertainty about how other actors in the online information ecosystem exercise their discretion over content.

There continues to be a need to study more deeply the grounds for intervention at each possible point within the ecosystem in order to begin to develop coherent, ecosystem-wide approaches and standards. The reality is that, given the existing structures, the propriety of Apple taking action against either Twitter or TikTok is neither well understood nor transparent to the public.

That lacuna may soon be filled. Indeed, if Bennet has his way, it may be filled very soon. But, for now at least, though app stores are theoretically useful venues for content moderation, they are poorly understood and thus of limited practical utility.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company, and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
