Fixing the Feeds: A Policy Road Map to Mitigate Algorithmic Harms

In the United States, state and federal lawmakers have introduced more than 75 bills since 2023 targeting the design and operation of algorithms, more than a dozen of which have passed into law. Last year, both New York and California passed laws seeking to restrict children’s exposure to “addictive feeds.” This year, policymakers in Connecticut, Missouri, and Washington state have launched similar initiatives targeting algorithmic design. At the same time, many state attorneys general are suing tech platforms for allegedly designing defective and harmful algorithms, including one lawsuit brought by 42 states against Meta over its design choices. Efforts to address the design of algorithms will continue to expand in 2025 and beyond, highlighting the importance of adopting fit-for-purpose policy approaches.
As lawsuits mount and legislation aimed at mitigating harms proliferates, the battle over how algorithmic recommender systems should be designed is heating up. Yet common policy solutions that focus on mandating chronological feeds or limiting personalization fail to address the core issue: how to design recommender systems that align with users’ genuine long-term interests rather than exploiting their short-term impulses.
A new report, “Better Feeds: Algorithms That Put People First,” authored by a distinguished group of experts convened by the Knight-Georgetown Institute (KGI), explores the research behind recommendation algorithms and proposes a more nuanced suite of guidelines that, if adopted, could transform the online experiences of youth and adult users alike.
The Problem With Maximizing Attention
Algorithmic curation has become ubiquitous across social media, search, streaming services, e-commerce, gaming, and more. A single platform may deploy many different recommender systems to power social media feeds, ad displays, comment sections, account recommendations, notifications, video and audio autoplay selections, and many other features. Recommender systems are designed to surface the “items” (such as accounts or pieces of content) most likely to advance the platform’s goals. Some platforms optimize their recommender systems to maximize “engagement”—the chances that users will click on, like, share, or stream an item. Because of this design, when recommender systems structure a social media feed, choose the next video to play, or select the next ad to show, the items deemed most likely to command attention from users are ranked on top.
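To make the mechanics concrete, here is a minimal, hypothetical sketch of engagement-based ranking in Python. The item fields, weights, and probabilities are invented for illustration and do not describe any particular platform’s system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    p_click: float   # predicted probability that the user clicks
    p_like: float    # predicted probability that the user likes
    p_share: float   # predicted probability that the user shares

def engagement_score(item: Item) -> float:
    # Weighted sum of predicted engagement signals; the weights are made up.
    return 1.0 * item.p_click + 1.5 * item.p_like + 2.0 * item.p_share

def rank_feed(candidates: list[Item]) -> list[Item]:
    # Items deemed most likely to command attention are ranked on top.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("post_a", p_click=0.30, p_like=0.10, p_share=0.02),
    Item("post_b", p_click=0.10, p_like=0.05, p_share=0.20),
])
print([item.item_id for item in feed])  # ['post_b', 'post_a'] on this toy data
```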
The problem with this approach is that a user who comments on or shares a post does not necessarily reveal their desires or preferences. Rather, these behaviors may reflect other reactions, such as a surge of negative emotion or impulsive scrolling that the user later regrets. This gap between behavior and preference is key to understanding why users identify harms resulting from what they see in their feeds. Over the past two decades, psychological research has demonstrated resoundingly that people’s choices do not always align with their underlying preferences. One reason is that the “self” making decisions is not unitary: In some contexts people act impulsively, while in others they act with more thought and deliberation.
In surfacing the items most likely to maximize engagement, these recommender systems can shape human behavior. The design of these systems encourages users to behave automatically and without regard to their deliberative preferences. Platforms are incentivized to design their products in this way because maximizing the chances that users will click, like, share, and view items aligns well with the business interests of companies monetized through advertising.
A broad body of research has demonstrated that children and teens experience negative consequences from social media use in some cases and that they are often subject to engagement-maximizing tactics. The cognitive and social-emotional development of adolescents can make them more vulnerable than adults to risks associated with exposure to algorithmically curated media. While research into the effects of social media on youth continues to advance, one area of clarity is that social media use displaces healthier activities, like sleep.
Because designing recommender systems to maximize engagement is a strategy by which companies extend their products’ use, engagement-based feeds likely contribute to this harm. Indeed, adolescents often report using social media late at night and losing track of time when doing so. This can lead to insufficient sleep in children and teens, which is known to influence various other health outcomes, including the likelihood of learning problems, depression, and suicidal ideation.
The Drawbacks of Chronological Feeds
These and many other perceived harms have motivated policymakers to act, but when it comes to algorithms the approach has been binary. In both the United States and Europe, laws have sought to improve the design of recommender systems by incentivizing platforms to rank items chronologically or restricting their ability to customize feeds for each individual.
These efforts target recommender system design by placing limits on crucial aspects of how those systems operate. Typically, recommender systems interpret extensive signals of user behavior and use this information to predict which items, out of the universe of available items, are most likely to induce engagement. Two common strategies employed by policymakers—limiting personalization and mandating chronological feeds—alter this process.
It is easy to understand the appeal of mandating chronological feeds and limiting personalization: Both are simple to grasp and commonly deployed in many digital media (such as messaging and email). But research casts doubt on their effectiveness at mitigating algorithmic harms. Implementing chronological feeds can shift the mix of recommended items in unexpected ways: These feeds may increase users’ exposure to abuse, amplify the prevalence of political and untrustworthy items, and create a recency bias that rewards “spammy” posting behavior. In the same vein, limiting personalization can be counterproductive because tailored content recommendations tend to enhance user satisfaction and lower barriers to finding high-quality items. Personalization, when crafted carefully, can be used to curate feeds for users in ways that further values other than engagement.
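To see concretely what these two interventions change, the hypothetical snippet below contrasts a mandated chronological feed with a non-personalized, popularity-based feed; the field names and scoring choices are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    post_id: str
    created_at: datetime         # when the post was published
    predicted_engagement: float  # per-user prediction used by a personalized feed
    global_popularity: float     # a single score shared by every user

def rank_chronological(candidates: list[Post]) -> list[Post]:
    # A mandated chronological feed ignores predictions entirely and rewards
    # recency, which can favor frequent, "spammy" posting.
    return sorted(candidates, key=lambda p: p.created_at, reverse=True)

def rank_without_personalization(candidates: list[Post]) -> list[Post]:
    # Limiting personalization means every user sees items scored the same way,
    # for example by aggregate popularity rather than per-user predictions.
    return sorted(candidates, key=lambda p: p.global_popularity, reverse=True)
```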
Designing Algorithmic Feeds for Long-Term User Value
The alternative approach described in “Better Feeds” promotes recommender system designs that support long-term user value. This idea encapsulates the objective of aligning recommender system design with outcomes that reflect the deliberate, forward-looking preferences and aspirations of users. For example, recommender systems can support long-term user value by asking users to explicitly state their preferences, or by relying on surveys or on indicators of item quality selected by users. When recommender systems fail to further long-term user value in these ways, meaningful numbers of users may regret their experiences on a platform or report a loss of self-control. This outcome reflects the fact that optimizing for engagement does not typically promote long-term user value. Design approaches that focus on long-term user value can address harms related to recommender systems while preserving the benefits that thoughtful use of engagement data and personalization can offer.
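One way to picture this shift is as a change in the ranking objective: The score for each item blends predicted engagement with signals users deliberately supply, such as stated preferences, survey responses, and quality ratings. The sketch below is a hypothetical illustration of such a blended objective; the signal names and weights are assumptions, not a formula from the report.

```python
def long_term_value_score(
    predicted_engagement: float,     # short-term behavioral prediction
    stated_preference_match: float,  # 0-1 match with topics the user explicitly chose
    survey_satisfaction: float,      # 0-1 satisfaction users report for similar items
    quality_rating: float,           # 0-1 quality indicator selected by users
) -> float:
    # Hypothetical weights: engagement still counts, but deliberate,
    # user-supplied signals dominate the ranking.
    return (
        0.2 * predicted_engagement
        + 0.3 * stated_preference_match
        + 0.3 * survey_satisfaction
        + 0.2 * quality_rating
    )

# Example: an item with high predicted engagement but weak preference match
# and low quality scores lower than it would under a pure engagement objective.
print(long_term_value_score(0.9, 0.2, 0.3, 0.1))  # ≈ 0.35
```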
The “Better Feeds” report establishes guidelines for achieving this balance, comprising three components: design transparency, user choices and defaults, and assessments of long-term impact. While there is no guarantee that the constitutionality of legislative or regulatory efforts based on these guidelines would be upheld, the guidelines attend to concerns about the potential for regulation to implicate speech rights under the First Amendment and platform liability immunity under Section 230. Developing legislative text to support any of the guidelines would require nuance based on evolving case law.
Design transparency: While platforms currently disclose some information about the design of their recommender systems, these disclosures provide limited information about which input data matters most, and none of them describe how the success of those systems is evaluated. Real accountability requires public disclosure of all input data sources and the weight each is given in the algorithm’s design, how platforms measure long-term user value, and the metrics used to assess the product teams responsible for the recommender systems. These disclosures would allow outside experts to understand and compare different systems, and they would motivate platforms to optimize in ways that demonstrate their attentiveness to long-term user value and satisfaction.
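As a concrete illustration only, a machine-readable version of such a disclosure might resemble the hypothetical structure below; the field names, signals, and weights are invented and do not reflect the report’s required format or any platform’s actual data.

```python
# Hypothetical machine-readable disclosure; names and values are illustrative.
recommender_disclosure = {
    "system": "home_feed_ranker",
    "input_signals": [
        {"name": "watch_time", "weight": 0.40},
        {"name": "explicit_topic_preferences", "weight": 0.35},
        {"name": "survey_reported_satisfaction", "weight": 0.25},
    ],
    "long_term_value_metrics": ["28_day_reported_satisfaction", "regret_rate"],
    "product_team_metrics": ["long_term_retention", "user_reported_value"],
}
```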
User choices and defaults: If implemented effectively, user controls could be powerful tools for exercising long-term preferences. To achieve this, platforms must enable users to choose among multiple recommender systems, with at least one option optimized for long-term user value, and must actually honor user preferences about what kinds of items should appear in their feeds. For minors, there may be legitimate concerns that offering a choice of recommender systems does not go far enough to protect these users given the stage of their development, so default recommender systems for minors must be designed to support long-term user value. Because user controls are notoriously difficult to use, robust and accessible controls, paired with healthier defaults for minors, are the minimal steps needed to start exposing users to feeds centered on their own aspirations and preferences.
Assessments of long-term impact: Platforms can deliver long-term value to users only if they continuously test the impact of algorithmic changes over time. Platforms can conduct these assessments by running so-called holdout experiments that exempt a group of users from design changes for 12 months or more. Public disclosure of aggregated experiment results and independent audits must be adopted for accountability. These long-term assessments would incentivize platforms to show that users who received product updates throughout the year are more satisfied and more likely to stay on the platform than those who did not. This creates a natural check against short-term thinking: If product changes boost immediate engagement but ultimately lead to negative outcomes, those problems will become clear when the two groups are compared.
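In outline, a holdout comparison reduces to aggregating the same long-term metric for both groups and checking whether the year’s updates actually left users better off. The sketch below is a hypothetical illustration; the metric and the numbers are invented.

```python
from statistics import mean

# Hypothetical 12-month holdout comparison. Each list holds a per-user
# satisfaction score (for example, from surveys); the values are invented.
updated_group = [0.72, 0.64, 0.81, 0.58, 0.77]  # received the year's design changes
holdout_group = [0.70, 0.69, 0.75, 0.61, 0.74]  # exempted from design changes

def satisfaction_gap(updated: list[float], holdout: list[float]) -> float:
    # A positive gap suggests the year's changes improved long-term satisfaction;
    # a negative gap flags engagement gains that left users worse off.
    return mean(updated) - mean(holdout)

print(f"difference in mean satisfaction: {satisfaction_gap(updated_group, holdout_group):+.3f}")
```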
The United States is far from the only jurisdiction examining approaches to mitigate concerns about algorithmic feeds. In the European Union, for example, the Digital Services Act establishes transparency, accountability, and user control obligations related to recommender systems. Because of differences in legal frameworks and jurisprudence between the United States and other parts of the world, additional measures to promote better recommender system designs that go beyond those described above may be feasible in other jurisdictions. The “Better Feeds” guidelines include additional proposals that could further incentivize more user-centric designs:
- Public content transparency: Platforms must continuously publish samples of public content that is the most widely disseminated, the most engaged with by users, and representative of a typical user experience.
- Better user defaults: By default, platforms must optimize all users’ recommender systems to support long-term user value.
- Metrics and measurement of harm: Platforms must measure the aggregate harms to at-risk populations that result from recommender systems and publicly disclose the results of those measurements.
Recommender systems play an integral role in shaping online experiences, yet their design and implementation often prioritize short-term engagement over long-term user satisfaction and societal well-being. Better recommender systems are possible. To make this happen, policymakers should focus on incentivizing designs that support long-term value and high-quality user experiences. The “Better Feeds” guidelines serve as a road map for anyone interested in promoting algorithmic systems that put users’ long-term interests front and center.