
Lawfare Daily: Social Media Data Practices, with the FTC’s Jacqueline Ford and Ronnie Solomon

Justin Sherman, Jacqueline Ford, Ronnie Solomon
Thursday, November 14, 2024, 8:00 AM
Discussing the FTC's new staff report on social media.

Published by The Lawfare Institute
in Cooperation With
Brookings

On this episode, Lawfare Contributing Editor Justin Sherman sits down with Jacqueline Ford and Ronnie Solomon, attorneys in the FTC Division of Privacy & Identity Protection, to discuss the FTC’s new 6(b) staff report on the data practices of nine social media and video streaming companies, from Twitch to Discord to YouTube. They discuss the report’s findings on data collection, retention, and use practices, and cover the privacy impacts of these practices, their intersections with FTC regulatory powers, and what the report authors recommend next.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Jacqueline Ford: Overall, we found that the companies we looked at failed to adequately police their data handling. And we found that even though these are the entities that are in the best position to implement privacy protective measures, they often did not.

Justin Sherman: It's the Lawfare Podcast. I'm Justin Sherman, contributing editor at Lawfare, with Jacqueline Ford and Ronnie Solomon, attorneys with the Federal Trade Commission's Division of Privacy and Identity Protection.

Ronnie Solomon: There's a potential for harm, and I think this is especially true where they're inferring information that users haven't chosen to provide in the first instance, which means companies could potentially be weakening the effectiveness of and subverting user choices about their data.

Justin Sherman: Today we're talking about a new staff report from the FTC on social media and video streaming companies, data privacy and security, and algorithms.

[Main Podcast]

Why don't we start by you both telling us about yourselves and how you came to work in the FTC's Division of Privacy and Identity Protection.

Ronnie Solomon: Happy to start. So I'm Ronnie Solomon. I'm an attorney in the Division of Privacy and Identity Protection. I've been with the FTC for about eight years. I actually began my legal career at a law firm in Silicon Valley and worked in private practice there representing tech companies. But I always wanted to work in the public sector, do public service, and always kind of had that in the back of my head, that that's something I would pursue.

About five years in, I made the move. I had always followed the FTC and been a big fan, switched to the FTC San Francisco office, got to work on all kinds of consumer protection and competition issues there, developed an interest and expertise in privacy. And about three years ago, I switched to our privacy division here in DC, and it's been great. I really love the work that we do here, and everyone at the FTC is just super dedicated, and it's a really great place to work.

Jacqueline Ford: And I'll give a little disclaimer for both Ronnie and myself that, you know, all FTC folks usually give, and that's that the views expressed here are ours and not necessarily those of the FTC or any commissioner. So, how I came to the Division of Privacy and Identity Protection, or DPIP as we call it.

I actually came to DPIP straight from law school, which I think is a little rare. Before law school, I actually wanted to do work that related to mental health. Prior to law school, I was a case manager for adults with severe mental illness. When I went to law school, I tried out some of that work, and I didn't love it as much as I thought I would, and so I actually got turned on to the FTC's Bureau of Consumer Protection summer law clerk program. And I came for a summer, worked as a summer law clerk for all the divisions in the Bureau of Consumer Protection. I was lucky enough to get asked to come back, and when I came back, I came permanently back to DPIP.

So it's been great. It's been about 11 years and I've been pretty much with DPIP that whole time. I took one year to be a counsel to the BCP director, so that gave me some insight on what the other divisions in the Bureau of Consumer Protection are doing. But otherwise, you know, just been in DPIP and working on various cases and investigations and the 6(b) report now.

Justin Sherman: Perfect segue. So we're going to dive into that report today. This is a staff report the FTC recently published, co-authored along with your DPIP colleague Ryan Mehm, titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services.” So it's a fascinating report, and we're going to get into the details and the various findings and recommendations in a minute, but first I want to talk about the process for this report.

So, Jacqueline, you just mentioned 6(b), so why don't you explain to us what it means that the report is written based on 6(b) orders, as they're called, from 2020 and why did the commission decide to issue those orders and this staff report?

Jacqueline Ford: So that's an interesting question to start with, and I hope, I promise, that this whole conversation won't be too legal or lawyerly, but…

Justin Sherman: That's what we love here, so go for it. 

Jacqueline Ford: I'll start with a little bit of foundation because I think it's important. So the FTC has a really unique power under the FTC Act, specifically 15 U.S.C. Section 46(b), and we refer to it as Section 6(b). And this particular section gives the FTC authority to require entities subject to our jurisdiction to respond to requests for information or documents. And we call those requests orders, 6(b) orders.

So one important thing to note is that the materials we receive in response to 6(b) orders cannot be directly used for civil law enforcement. So we use it to just really examine an industry and most often we'll have a report after. So, going back in time a bit, in December of 2020, the Commission voted to issue 6(b) orders to nine entities that provide social media or video streaming services.

So what did the Commission request? The Commission requested information on how these nine collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens in particular. So, as to your second question, Justin, why did the Commission decide to issue the 6(b) order and subsequently the staff report that just came out last month, the Commission decided to issue the orders because such social media and video streaming services play an important role in our daily lives and culture.

And while they play an important role like providing places where people can connect, create or share content, these types of services have also been at the forefront of building the infrastructure for mass commercial surveillance. And this vast surveillance has come with serious costs to our privacy.

So given the FTC's unique legal authority here under Section 6(b), and the concerns in this space, the Commission decided to pursue this study. So, we issued the orders, the companies responded, staff, including Ronnie and me, reviewed the responses, and this report is the culmination of all that work. Now, in the report, we can't call out what specific companies are doing because of confidentiality restrictions, so a reader will notice that the findings are aggregated and anonymized.

However, the Commission still decided it was important to issue this report because it still allows us to share what we learned in ways that we hope can be of use to policymakers and industry.

Justin Sherman: Thanks for that. I think that's important framing. And as you said, including with the fact that, you know, specific companies are named as being included, but not necessarily all of the details of the conduct.

So with that said, the report looks at Amazon, which owns gaming platform Twitch. It also looks at Facebook, YouTube, Twitter or now X, Snapchat, ByteDance which owns TikTok, Discord, Reddit, and WhatsApp. So, obviously the industry of social media and video streaming companies is much bigger than these nine players. So, what does focusing on these nine in particular tell us as readers of the report?

Ronnie Solomon: Yeah. So that's a really good question, right? Why did we choose these specific nine?

So we really wanted to identify those companies that would give us a really good cross section of social media and video streaming services. And we really wanted to cast a wide net. We also wanted some of the largest participants with the most users that could give us the most useful data in making comparisons and conclusions across participants in this study.

You know, if we had issued orders to every single industry participant, you know, the resources and time that would have been required to review so many productions, you know, that would have been pretty unworkable. So, we think we landed on the right balance. We ended up with a variety of platforms that are widely used by consumers, including children and teens. Everyone from Facebook to Instagram, Snap, services that are focused on short form videos like TikTok, video streaming services like YouTube, Twitch, and even discussion based communities like Reddit.

So that's a little bit about our thinking as to how we chose these nine specifically.

Justin Sherman: With those nine, the report has four main sections: data collection, advertising, algorithms, and children and teens. So, let's walk through each of those sections, starting with the first, with data collection.

So talk to us about what you found in reviewing these companies' data collection practices, and just to call this out, because I thought it was super interesting, talk to us about your findings on indefinite retention of data about consumers.

Jacqueline Ford: Yeah, so the first section of the report on the data practices is quite lengthy, and we have a lot of findings but overall, we found that the companies we looked at failed to adequately police their data handling, and we found that even though these are the entities that are in the best position to implement privacy protective measures, they often did not.

Now, this is a pretty broad statement, so I think it helps to take a closer look at some of the specific findings we point out in the report, and which I believe helps support why we don't think they were consistently making consumers' privacy a priority.

First, the companies generally collected vast amounts of data about users and non-users from a variety of sources on and off the platforms. And they generally used this data for a variety of purposes, including inferring even more information about consumers.

A second specific finding is that most of the companies reported sharing consumers' personal information with affiliates or other company branded entities. But very few of those companies had a specific approval process or any additional contractual language that governed such sharing. This is really concerning, especially where a lot of the corporate entities are incorporated in various jurisdictions like China.

A third finding is that most of the companies share consumer information with third parties, and in our review of the materials, several were incapable of, or unwilling to, identify all instances in which such sharing occurred, and the purpose behind such sharing. Most reported that they required contracts prior to any sharing, but some did not. And again, our review found that many of the third parties that received data were located outside of the U.S., so this again raises concerns about U.S. consumers' information being exposed to potential collection by foreign governments.

A fourth finding is that while all of the companies reported implementing data minimization principles, very few had actual written data minimization policies. So, obviously, a tension there. A fifth finding is that several companies would anonymize, pseudonymize, de-identify or aggregate data instead of deleting it when the data was no longer needed, and this is what would happen either upon user request to delete data or when the deletion was initiated by a company. So we thought that was very interesting.

And finally, the services’ privacy notices were frequently lengthy, vague, and generally unhelpful. We do note this in the report, but I think it's worth emphasizing that Commission staff were often unable to decipher such policies and notices and clearly ascertain actual practices, which says something because, you know, this is what we work on every single day.

As to your second question, you're correct. We did have some very interesting findings with respect to the companies' practices around data retention. Our review found that very few companies had transparent and specific data retention policies. Generally, the companies reported that they would retain data for one of three periods of time.

One, for as long as there was a business purpose. Two, until the user actively deleted the data. Or three, after a set period of time. With respect to retaining data as long as there is a business purpose, we found this promise to be illusory. The companies were unclear on what a business purpose was, so it begs the question: what is a business purpose? What are the limitations to that? And when does such business purpose end?

This leads us to believe that the companies could use this retention limit based on a business purpose as a means to indefinitely retain consumer data. The report notes that this concept of retaining data as long as there is a business purpose is especially concerning now with the ever increasing presence of AI, knowing that many entities could argue that training AI models is a quote unquote business purpose that justifies retaining consumer data indefinitely.

Justin Sherman: I also want to call out a graphic on page 21, which is one of my favorite parts of the report. You have a graphic showing eight of the different ways, and of course I say eight of, because we could certainly, you know, imagine drawing a graphic with many more, but this graphic shows eight of the different ways that companies collect the kinds of data, Jacqueline, that you're talking about and that are covered in the report.

And so this ranges from companies gathering data directly from user inputs, to passively gathered information, to companies tapping into ad trackers and data brokers. And so you found that the nine companies are sometimes using these sources to collect data on, I'll say, who's surprised, not just their own customers, but people who are not even registered users.

So what does this kind of collection mean for consumers?

Jacqueline Ford: So you're right, Justin, that that graphic is not exhaustive. It shows the eight primary ways that, from the companies' responses to the 6(b) orders, we saw data being collected, but that's by no means to suggest that this is it, and there's no more collection beyond what's represented in this graphic.

But one of the things that the graphic highlights is that we found that the companies collected data about consumers' activity on the platforms, but also about consumers’ activity off of the platforms. The other thing it highlights, which you noted, is that the data collected is not just about users of the platforms but also information about people who are not registered users. So, for example, passively gathered information. For some services, a non-registered user can still visit a platform, and information about that individual will be passively gathered, even if they don't register or provide any other direct inputs.

Where it gets a little more concerning is that companies can still collect information about consumers who are not only not registered users, but they've never even visited the platform. So, for example, most companies with an advertising service purchased datasets from data brokers or allowed advertisers to import customer lists for advertising purposes.

This means that for some consumers who have actively avoided interacting with social media and video streaming services, these entities may nevertheless have information about them. So, this is frightening for these individuals because it suggests that there's nothing you can do to avoid your information being shared or otherwise disclosed to some of these platforms.

So in terms of what it means for consumers, and I know we'll talk about this later, but we have some recommendations for policymakers and industry that we think would help address some of the concerns that are raised by these practices.
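To make the customer list mechanism described above a bit more concrete, here is a minimal, hypothetical TypeScript sketch of the kind of advertiser "customer list" import flow being discussed, in which an advertiser hashes identifiers from its own records and uploads them to a platform for matching. The function names, hashing choice, field names, and payload shape are assumptions for illustration only, not any company's actual API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical illustration only. An advertiser takes its own customer records
// (for example, email addresses), normalizes and hashes them, and sends the
// hashes to an ad platform, which matches them against identifiers it already
// holds. A person who never visited the platform can still end up represented
// in this matching process.

function normalizeAndHash(email: string): string {
  const normalized = email.trim().toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

// An advertiser's customer list (invented examples).
const customerEmails = ["alice@example.com", "  Bob@Example.com "];

// What gets uploaded in a flow like this: hashes rather than raw addresses,
// although the platform can still link a hash to a person if it holds the
// same identifier on its side.
const uploadPayload = {
  audienceName: "spring-sale-customers", // hypothetical audience label
  hashedEmails: customerEmails.map(normalizeAndHash),
};

console.log(JSON.stringify(uploadPayload, null, 2));
```

Even though only hashes move in a flow like this, the matching step is how someone who has actively avoided a platform can still be represented in its advertising systems, which is the concern Ford raises above.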

Justin Sherman: So, next up, we mentioned these four categories of the report, and the next one is advertising. So, we've all heard for years now about digital advertising and related privacy and cybersecurity issues.

What did you find when you wrote this report section on advertising, was there much new in there? Or was it sort of highlighting important historical themes and trends that are still happening today?

Ronnie Solomon: Yeah. So when it came to advertising, first, I'll say, I think, you know, that people get advertisements on the internet isn't really a novel concept, but what this report does is really open the hood on how this all works underneath the surface. It's all behind the scenes, under the surface, and completely out of view to the consumer.

So, first I'll start by saying, you know, something that may be obvious to many, but, you know, advertising is what most of these companies' business models are based on. Generates billions of dollars in revenue while offering zero price, or as they would say, free services. And they do this by monetizing user and non-user data.

It's worth noting that not all of the companies had what we called a digital advertising service, but most did. And for those that did, most of their revenue was derived from serving these ads, including targeted ads. When we talk about digital advertising services, typically what we're talking about are business-to-business services that consumers don't interact with directly, but that cater to third-party advertisers, allowing them to advertise on the platform based on users' personal information.

And, you know, as Jacqueline mentioned, it was based on the data that came from both on and off the platform. So the companies allow advertisers to target people based on a multitude of data points that they collect from users from all different sources. You know, demographic categories, including categories that the platforms infer about you: your location, again, activities you're doing elsewhere on the internet, information from data sets they purchased from data brokers and other third parties.

And, you know, this report points out that part of how this tracking for advertising happens is that these companies publish pieces of code, these ad tech-tracking tools, pixels, and SDKs, that advertisers are able to, sort of, integrate into their website or platform, and everything that a user does on that website or app is transferred back to the social media company.

Again, this is something that's completely out of view to the user and allows companies to give their advertisers the ability to do hypergranular targeting for targeted ads. Obviously, this raises implications for sensitive data points, right? Things like your race, your religion, your sexual orientation, your political affiliation. The list can go on and on.

And the companies claim they prohibited targeting based on these sensitive categories. But what the report noted was that there was overall a lack of consistency about which categories were considered sensitive, and a lack of consistency in how the companies described prohibited forms of, you know, sensitive targeting throughout their policies. And obviously this lack of consistency doesn't benefit the consumer.

The report also talks about the fact that teens are not immune from targeted ads. So, companies noted that, you know, where they allow children under 13 to make accounts, they didn't permit ads to be targeted toward child users under 13.

But it was a totally different story for teens, people aged 13 through 17, with companies allowing targeted ads for those teens, but just saying that they limited the types of targeted ads that teens could receive.

So those are some of the highlights of kind of what we talk about in our advertising section. And again, I think what we did in that section was really open the hood and show users, show readers, and our audience in more detail, really, how this all works under the surface.
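To picture the pixel and SDK mechanism Solomon describes, here is a minimal, hypothetical TypeScript sketch of the kind of client-side snippet an advertiser might embed on its own site. The endpoint URL, pixel ID, and event fields are invented for illustration and do not represent any specific company's tracking tool.

```typescript
// Hypothetical illustration only; not any company's actual pixel code.
// A "pixel" is typically a small script an advertiser pastes into its own
// website. When a page loads or the user takes an action, the script fires a
// request to the platform's servers carrying details of the event, and the
// browser sends along cookies or other identifiers that can let the platform
// tie the activity back to a profile.

type TrackedEvent = {
  eventName: string;                   // e.g., "PageView" or "AddToCart"
  pageUrl: string;                     // the advertiser page the user is on
  timestamp: number;                   // when the event happened
  properties?: Record<string, string>; // e.g., product IDs or search terms
};

function firePixel(platformEndpoint: string, pixelId: string, event: TrackedEvent): void {
  // Encode the event as query parameters on a 1x1 image request, a classic way
  // to send data to another domain without showing the user anything.
  const params = new URLSearchParams({
    id: pixelId,
    ev: event.eventName,
    url: event.pageUrl,
    ts: String(event.timestamp),
    ...event.properties,
  });
  const beacon = new Image(1, 1);
  beacon.src = `${platformEndpoint}?${params.toString()}`;
}

// Example: an advertiser page reporting a shopping event back to a
// hypothetical platform endpoint.
firePixel("https://tracker.example.com/collect", "PIXEL-12345", {
  eventName: "AddToCart",
  pageUrl: window.location.href,
  timestamp: Date.now(),
  properties: { productId: "sku-001" },
});
```

The point of the sketch is simply that the data transfer happens from the advertiser's own page, entirely out of the user's view, which is what the report emphasizes.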

Justin Sherman: The next two, the last two sections of the report, one on algorithms. You mentioned some findings there. The other one on children and teenagers provided additional insights on top of what you mentioned into data minimization, automated analysis of data, and much more.

So starting with that algorithm bucket, what were the major takeaways?

Ronnie Solomon: Yeah, so I think the first big one was the companies rely heavily on the use of automated systems, everything from algorithms to data analytics to artificial intelligence in all of its forms. Automated systems really dictate most of the user experience, everything from the content that you see to search activity, again, advertising as we talked about, inferring personal details about users that are used for advertising. So automated systems really were at the forefront of sort of how these companies operate the platforms.

And again, these automated systems were used to predict or infer details about users and their lives. Things like their interests, their daily habits, their families and relationship statuses, information about their employment and income. Those are just some examples of things that the companies infer about you and attach to you as a user, creating new and potentially sensitive data points about users that users may not have chosen to provide in the first instance.

So a few high level observations. The use of people's personal information by these automated systems was by default. Users didn't have the option or right to opt in or opt out. There was overall a lack of control and transparency in that users couldn't control and really couldn't understand or see how their personal information was being used by these automated systems.

And this is obviously especially concerning when we talk about inferred information, data that's being purchased from third parties by the platforms, or data that's being derived from users off-platform activity. Again, these are all things that users might not know is happening and they don't have any visibility into what the companies are collecting.

The report also noted that the monitoring and testing processes were inadequate. We found that there were differing, inconsistent, and inadequate approaches relating to monitoring and testing the deployment of automated systems, and, you know, there's a backdrop here of potential for real harm, right? Where companies are inferring new and additional personal details about individuals.

Again, as I mentioned, things about your family, your interests, your income, your personal relationships, your lifestyle details, there's a potential for harm. And I think this is especially true where they're inferring information that users haven't chosen to provide in the first instance, which means companies could potentially be weakening the effectiveness of and subverting user choices about their data.

You know, let's say you're divorced and that's something you don't want to share on your social media profile, but the companies are inferring that about you anyway and attaching it to your name. You know, that's something that's undermining users' choices about what they choose to provide to companies and on the internet.

So that's kind of a high level, but I would definitely encourage everyone to read the report because there's a lot of interesting stuff about the companies' use of automated systems in our report.

Justin Sherman: Certainly. And per usual with the Lawfare Podcast, we'll link the report in the show notes. So folks can actually go look at what you mentioned, the details of the report, as well as the graphics and some of the other items we're flagging.

So, with that done, then to the last section, the children and teenagers section, talk to us about the major takeaways and particularly since we're talking about privacy and algorithmic targeting and other issues around children and teenagers, was there anything that shocked you the most when conducting the analysis and writing up that section?

Jacqueline Ford: So yeah, first the major takeaways with respect to children. I think the top line finding is that the social media and video streaming services bury their heads in the sand and ignore reality really when it comes to child users. Most services claim that because their services are not for children and they don't allow children to create accounts, that there were no child users.

But we cite to research in the report that indicates that approximately 40 percent of children between the ages of 8 and 12 use some form of social media. So there's clearly a disconnect between what is happening and what the services say they think is happening. It seems like the companies could know more about child users if they wanted to. Several companies reported to us that they inferred users' age ranges, but most said that they did not infer age ranges below 13 years old. This appears to us to have been a deliberate choice, as there is not a technological impediment to inferring age under 13 versus other age ranges. So it was definitely very interesting.

The major takeaway with respect to teenagers is also the thing that shocked me the most. And that is just how much the services treated teen users like traditional adult users. Of the services that allowed teens to create accounts, they all collected personal information from teens in the same manner as they collected personal information from adult users. That alone, I think, is just incredible that there was no change in the information collection practices between teen and adult users.

And only about half of the companies that allowed teens to create accounts implemented additional protective measures specific to teen users. But the most common thing that they implemented and said that this was specific to teen users was preventing or limiting access to adult content or adult features.

Some other privacy protective measures, like imposing stricter privacy settings by default, were less commonly applied by the companies with respect to teen users. So I think that's really the most shocking thing to me. And we discussed it in the report.

Justin Sherman: With that said, so, and as you talk about within that section, there's the issue of COPPA, right? The Children's Online Privacy Protection Act. And, as you noted around issues such as the age of users on different platforms, there really are important questions about COPPA and the representations that companies made to the commission about whether children use their platforms.

So, can you first tell us what COPPA is, for those who might be less familiar, though certainly many listeners are familiar? And second, what were the companies telling the Commission about their own COPPA compliance, and why does this matter as a regulator?

Jacqueline Ford: Somehow, I'm still the one answering all the nerdy lawyer questions and giving you cites, but here I go. So, COPPA is the Children's Online Privacy Protection Act, a federal privacy law that applies to children online. Without going into too much detail about it, it was enacted in 1998, and the FTC was asked, well, tasked through the act, rather, with putting together regulations pursuant to this act.

So that resulted in the COPPA rule. And the COPPA rule applies to operators of commercial websites and online services directed to children under 13 years old, or where the operators have actual knowledge that they are collecting, using, or disclosing personal information from children under 13. So once an entity knows that the COPPA rule applies, there are various requirements and things that have to be implemented, including obtaining what we call verifiable parental consent from the parent before collecting personal information online from children.

And violations of COPPA come with hefty civil penalties. Currently, they're just shy of $52,000 per violation. So now that you have a very quick primer on COPPA, we can bring it back to the report. And obviously, since we enforce COPPA, the Commission was interested in hearing about the companies' practices with respect to COPPA and how they comply with it in this 6(b) order.

So it's interesting because the companies told the Commission through their responses that because their services were not directed to children and because they didn't have actual knowledge of child users, that COPPA didn't apply to them. This was especially clear where companies were using AI to infer the age of users, but said that they didn't infer ages below 13, and presumably they did this to avoid COPPA liability because if they had inferred ages below 13, then one could argue that they would have actual knowledge of child users and thus kind of put them under COPPA liability and be required to implement things that are required under COPPA.

So basically the company said COPPA didn't apply to them either because the platform wasn't directed to children or they didn't have actual knowledge, and that means that the COPPA requirements weren't being implemented.

So, to answer the question you posed of why this matters: this matters here because we have a federal privacy law meant to protect children, and it's not being applied in the context of social media that we know kids are using. So altogether it really was eye-opening and I think is another very important finding in our report.

Justin Sherman: Absolutely. And I'm going to really double down on the nerdy lawyer questions. I think this is interesting, as you're saying, given the scale of the platforms and how many users, including how many child users, they have. You say $52,000 per violation, so explain to us just quickly, how is that measured? And in practice, let's say you're one of these companies with 10 million or, you know, 2 billion users, like, how does that kind of fine scale up?

Jacqueline Ford: So it can scale up really fast because, you know, we can base it off the number of child users, or children involved, or how many instances the personal information was collected. So, you know, you realize, in calculating $52,000 times x, with x being the number of violations, that can go up very, very fast. And so it's really another important tool that the FTC has, the civil penalty authority under the COPPA rule.
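As a purely illustrative back-of-the-envelope calculation of how that math scales, here is a short, hypothetical TypeScript snippet. The per-violation figure is a round placeholder near the roughly $52,000 maximum mentioned above, and the violation count is invented; neither reflects any actual case or finding.

```typescript
// Hypothetical illustration of how COPPA civil penalty exposure can scale.
// If each instance of collecting a child's personal information without
// verifiable parental consent counts as a separate violation, maximum exposure
// grows linearly with the number of instances.
const penaltyPerViolation = 50_000;       // round placeholder near the ~$52,000 figure noted above
const hypotheticalViolations = 1_000_000; // invented count, e.g., one instance per affected child

const maximumExposure = penaltyPerViolation * hypotheticalViolations;
console.log(`Maximum exposure: $${maximumExposure.toLocaleString("en-US")}`); // "Maximum exposure: $50,000,000,000"
```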

Justin Sherman: Thank you. And given all of these findings and these issues generally, whether it's the data retention problems that you mentioned or representations about kids’ data, what legal authorities does the FTC have in this area to conduct enforcement?

And, looking at past cases that are public, what kinds of practices in this area has the FTC challenged as deceptive or unfair?

Ronnie Solomon: So I'm about to give Jacqueline a little bit of a run for her money on nerdy lawyer answers. You know, we at the FTC enforce several laws that apply to, you know, the platforms at issue in this report and companies more broadly. So the main one, kind of our bread and butter, is Section 5 of the FTC Act, which essentially prohibits unfair or deceptive acts or practices, which is intentionally a broad statute and applies to most companies in our economy, with a few exceptions.

So when we talk about privacy, for example, and we talk about deception, a deceptive practice, we're talking about misrepresentations to users and consumers about your data privacy and data security practices. So, Section 5 prohibits companies from making false or misleading statements about how companies collect, use, share, retain, and secure customers' personal information. And here, when we talk about Section 5, it's both important what companies say to consumers and also what they fail to tell consumers.

So, it's important to comply with Section 5 to tell consumers the truth and the whole truth about your data privacy practices. Section 5 also touches on unfairness. When we talk about unfairness under Section 5, the law says it's a balancing test, and it looks at practices that are likely to or do substantially injure consumers, and whether or not there are countervailing benefits to those practices.

And so, just to give some examples of things that the Commission has said meet that test of being an unfair practice: things like selling consumers' sensitive location information, including to profile, surveil, and target them with advertisements; making retroactive changes to your data privacy or security practices or policies without notifying consumers and getting opt-in consent; collecting, using, and sharing consumers' sensitive personal information, things like health information, location information, without their knowledge or consent; failing to implement measures to prevent the unauthorized disclosure of information; having poor data security practices. So those are some examples of things that we challenged as unfair under Section 5.

In addition to Section 5, there are other rules and laws that we enforce that could apply to the platforms at issue in this report. Obviously the main one being COPPA, which we just talked about, and as Jacqueline mentioned, you know, those rules are important because they give us the ability to get civil penalties.

The FTC has used its enforcement powers against the platforms in this report. Just to give one example, in 2022, we took action against Twitter, now known as X, for deceptively using account security data for ad targeting purposes. That resulted in a $150 million penalty and a permanent injunction. And we've also obviously brought enforcement actions against companies like Facebook or Meta Platforms, just kind of another example of how we use our Section 5 and other authorities against companies in this report. So, just to give a little primer on how things work at the FTC:

If we suspect there's been some kind of a violation of the law or rule that we enforce, we may send the company a civil investigative demand for things like documents, written responses, or seek oral testimony. We conduct a thorough investigation, and if we determine there's been a violation of one of the laws we enforce, we may bring a complaint in federal or administrative court that results in either a settlement or a lawsuit. So that's a little bit about what we do here in terms of enforcing the law.

Justin Sherman: And the retroactive modification too. I know the FTC has, as you said, issued statements about that. I constantly am seeing that as definitely an area of important work as well, like you're saying, given pushes for AI training and so forth and how policies are rewritten after the fact.

So as we start to wrap up here, I want to turn now to the recommendations, because this isn't just an analytical report, but there are concrete things that you all recommend are done out of these findings. And so, can you highlight some of those top level recommendations and anything you think is highest priority?

Jacqueline Ford: Yeah, so as you said, the report includes staff recommendations that are based off of the report's findings, and our hope, and our primary intent, is that these staff recommendations be used to inform decisions made by policymakers and companies who provide social media and video streaming services.

So, the recommendations are broken out by the substantive sections of the report that we've already gone through. So, I will highlight some of the recommendations in the data practices and the children and teens sections, and then I'll pass it over to Ronnie, but, you know, I really encourage everyone to go read all of the recommendations. We're not going to go through them all right now, just in the interest of time, but we do think that they're all equally important.

But starting with data practices, one recommendation there was that the social media and video streaming services implement more concrete and enforceable data minimization and retention policies, which would include developing, documenting, and adopting concrete data retention policies that include clear cut and definite retention periods for each type of user data collected that are tied to the purpose for which the service collected the data.

So, this kind of ties back to what we were talking about earlier with the business purpose and is a recommendation in part meant to address that. I'm now moving on to the children and teens section. I'll focus on the teen users, since that was, for me, one of the most shocking things. One of our recommendations there was that the social media and video streaming services do more to protect teen users of their services.

And we listed out some examples of what they could do. And there are four, and the first is designing age-appropriate experiences for teen users. The second was affording teen users the most privacy protective settings by default. The third was limiting the collection, use, and sharing of teen users' data, which I think is really important because, as I noted earlier, the collection of information from teen users was the same as that of adult users, which is incredible. And the fourth recommendation we had was to only retain teen users' personal information for as long as necessary to fulfill the purpose for which it was collected.

So now I'll pass it off to Ronnie to highlight some of the other recommendations.

Ronnie Solomon: So when it came to advertising, one of the recommendations that we have in the report is that social media and video streaming services prevent the collection of any kinds of sensitive information through the ad-tracking technologies that they distribute to advertisers.

And that the platforms, when they receive this sensitive information for advertising or marketing purposes, take steps and do more to prevent the use or onward disclosure of that sensitive information. When it came to AI, we had a number of recommendations, but, you know, as I talked about earlier, companies were collecting all kinds of personal information from various sources that were being ingested into the automated systems, and there was a lack of access, choice, control, transparency, explainability, and interpretability relating to how they use these automated systems, and our recommendation is that companies do more to address that.

And make these systems more easily understandable and, you know, give users more choice when it comes to the use of their information in automated systems.

Justin Sherman: These recommendations are really important, I think, to reflect on. Separate from those, what happens now that the commission has issued the report?

Are there any next steps out of the report? And do you anticipate the Commission may conduct any more 6(b) studies with other tech companies looking at similar privacy or security issues?

Ronnie Solomon: Yeah. So I'll answer the first part of that question in terms of next steps. And turn the other part about other 6(b) studies over to Jacqueline.

So, you know, this report is part of an ongoing conversation about commercial data practices of social media and video streaming companies. We hope our report sheds light on aspects of social media that policymakers, consumers, the public may not have known about. And that this is sparking conversations about something most of us use in our daily life. As part of this conversation, we hope policymakers and companies will carefully consider and implement all the recommendations that we lay out in our report.

Jacqueline?

Jacqueline Ford: Yeah, and with respect to 6(b), the Commission's 6(b) authority and whether or not it will be used with other tech companies, I think the quick answer is yes.

We have used that authority provided to us under Section 6(b) and we will continue to do so. Besides the 6(b) orders that were issued and led to this current report, we have two other sets of 6(b) orders that were issued in the relatively recent past that I'll highlight for you. The first was in March 2023, when the Commission issued 6(b) orders to Meta, TikTok, YouTube, and Twitter, seeking information on how the platforms screened for misleading ads for scams and fraudulent and counterfeit products.

And more recently, in January of this year, the Commission issued 6(b) orders to Alphabet, Amazon.com, Anthropic, Microsoft, and OpenAI seeking information regarding recent investments and partnerships involving generative AI companies and major cloud service providers.

So these are two examples, but they go to show that the FTC continues to use this unique power that it has, and it will continue to do so, among other things, to look at the practices of powerful tech companies.

Justin Sherman: That's all the time we have. Thank you both for coming on.

Jacqueline Ford: Thank you for having us.

Ronnie Solomon: Thank you so much.

Justin Sherman: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Isabelle Kirby McGowan of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
Jacqueline Ford is an attorney in the Division of Privacy and Identity Protection at the Federal Trade Commission.
Ronnie Solomon is an attorney in the Division of Privacy and Identity Protection at the Federal Trade Commission.
