
Decoder with Nilay Patel

The Verge

Future Regulatory Outlook and Conclusion

From "A jury says Meta and Google hurt a kid. What now?" (Apr 2, 2026)

Excerpt from Decoder with Nilay Patel


Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today we're talking about the landmark social media addiction trials that just resulted in two major verdicts against Meta and Google. There's one case in New Mexico against Meta, and another in California against both companies, both of which have said they plan to appeal. These are complicated cases with some huge repercussions for how these platforms work and the very nature of speech in America.

So to help us work through it all, I've brought on two heavy hitters: my friend Casey Newton, the founder and editor of the excellent newsletter Platformer and co-host of the Hard Fork podcast, as well as Verge senior policy reporter Lauren Feiner, who was actually in that Los Angeles courtroom where executives like Mark Zuckerberg took the stand in the case of a 20-year-old woman, Kaylee, who successfully argued that Meta and Google negligently designed their platforms in ways that contributed to her mental health issues.

These cases are the first in a wave of injury lawsuits targeting tech companies over the design decisions of platforms like Instagram and YouTube. They argue that the platforms have fundamental design flaws that harm users, especially teenagers, and that these companies knew about these problems and were negligent in shipping these features anyway. These cases are part of a much larger set of moves that aim to fundamentally change the legal mechanisms that exist that might regulate social media.

Now, harm in the context of these cases isn't just addictive design that brings users back compulsively. It's also features like algorithmic recommendations and camera filters that make issues like anxiety, depression, and body dysmorphia worse. This emphasis on how the platforms work, as opposed to the content, is part of a movement that has been building for years, focused on the argument that social media is not and cannot be healthy; that, in fact, these products might be defective, the same way that cigarettes, when used as designed, cause cancer.

That's a lot of complicated ideas, and Casey and Lauren and I really spent some time working through them. The first complicated idea is whether or not there is a distinction between product features like recommendations, autoplay video, and infinite scroll, and the types of harmful yet legal speech served to people on these platforms using those tools, like eating disorder videos or posts designed to convince young men to hate women.
But it's very difficult, if not unconstitutional, to force the companies to moderate this kind of content in specific ways. The First Amendment obviously prohibits the government from regulating what speech these companies promote or moderate, and private action is usually blocked by Section 230 of the Communications Decency Act, which protects tech platforms from being held responsible for the content their users post. So it's really hard to pull all these ideas apart. An algorithmic feed with no content in it simply isn't a compelling product, let alone a negligently defective one that causes harm.

A lot of smart people who have been on this show and all over The Verge over the past few years have said that these rulings are just an end run around 230 and the First Amendment: a way to make platforms liable for what ultimately is just speech, and in a way that will cause more speech to be restricted. Here, the three of us talk a lot about that idea, and whether the growing calls to repeal 230 entirely have any logical connection to these cases, or whether they're just politically opportunistic.

But there are many more ideas at play here, and even more layers of complication. You're gonna hear Casey and I crash out a few times in this episode. We have been covering tech regulation for so long, and it feels silly to act like everything is working well for regular people, who have negative experiences with social media all the time. Section 230 is 30 years old now, and it's unclear whether the world it was designed to help create ever came into existence. You'll hear Lauren talk about how some of the authors of Section 230 are open to changes, particularly around AI and speech online. At the same time, any changes to 230 run headlong into the First Amendment, and potentially into opening the door to government speech regulations at scale. Like I said, it's complicated, and I'm very curious to hear what you all think about this, because it's clear that a lot of things are about to be up for grabs.

Before we start, a quick reminder that you can listen to this episode, or any episode of Decoder, completely ad-free by subscribing to The Verge. Just go to theverge.com/subscribe. Okay: Platformer's Casey Newton and Verge senior policy reporter Lauren Feiner on the major social media lawsuits. Here we go.

Lauren Feiner, you're a senior policy reporter here at The Verge. Casey Newton, you're the founder and editor of Platformer, and I would say forever the Silicon Valley editor here at The Verge.

I do continue to identify as the Silicon Valley editor of The Verge, so I'm glad you feel the same way.

You can check out, but you can never leave, buddy. Welcome, both of you, to Decoder. I want to talk about these trials that a bunch of social media companies faced in California and New Mexico. Lauren, at a high level, you were in the room for at least the trial in California. I think Snap and TikTok settled that one; they were out. YouTube and Meta just lost a jury verdict. Describe what happened in those trials and what you saw in the courtroom.

These trials were about the design decisions that social media companies make: how users are going to interact with what comes across their feeds. It was really trying to get around a problem that's been going on with tech for a long time: can you separate design from content on these platforms? That's what these trials were trying to get at, and what came out at trial in the courtrooms were a lot of internal documents from these companies.
In the LA case, it was Meta and YouTube, and in New Mexico it was just Meta. And we saw lots of internal documents, lots of former Meta employees turned whistleblowers take the stand to discuss the decisions they made and the things they saw. So that was a lot of what we saw in the courtroom, and in LA we even saw the head of Instagram, Adam Mosseri, and the CEO of Meta, Mark Zuckerberg, take the stand.

Casey, everyone's calling these bellwether trials. We call them bellwether trials on The Verge; the whole industry has decided this is the word we're gonna use. Can you just quickly explain what that means? You've been covering attempts to regulate these companies forever, and the idea that these trials are a bellwether seems particularly meaningful here.

As you know, Nilay, for basically the past 20 years, companies have been able to use Section 230 as a shield. And whenever there is any remotely content-related challenge to any of these platforms in court, they just get dismissed out of hand. The reason that these cases are bellwethers is that if they were successful, it would open up this new front for litigation, and these companies could no longer just automatically use Section 230 as a shield. And that now indeed has happened, and we're expecting there will now be dozens more lawsuits proceeding along exactly these same lines.

Section 230, as I'm hoping by this point Decoder listeners know, is the law that says platforms are not liable for what their users post. So if I put up a post on Instagram or TikTok that says Casey Newton is horrible, Hard Fork is my sworn enemy, it should be made illegal, then Casey can sue me, but you can't sue Instagram. And that has always been really important, because it means that whenever anyone says they're harmed by the platforms, the platforms can say: it wasn't us, it was actually the speech that you're mad about, and our role in distributing or promoting that speech is actually the same as the speech itself. It seems like this trial did a better job of getting around that argument than attempts in the past. I'm thinking of cases like Herrick v. Grindr. There was the famous case against Snapchat with the speedometer filter, where a teenager drove too fast trying to get a screenshot of himself running his car as fast as he could in Snapchat. Those cases were not bellwethers in the same way. What set these apart, and why was that argument more successful this time?

So I think the Snapchat case was a really important precedent. It's this case, Lemmon v. Snap. Snapchat used to offer this filter where you could turn it on and take a video of yourself in your car, and it would show how fast you were going. And plaintiffs successfully argued that this had created an incentive within the app for people to go really, really fast and do dangerous things. And indeed, in this particular case, there was a dangerous crash. So the reason that that was important was all of a sudden the 230 shield isn't absolute, right? There have already been a couple of minor exceptions; you know, the platforms have to remove things like terrorism and CSAM. But now we're saying, okay, you can't actually offer a filter like this, because it might incentivize a terrible behavior. This is what sort of opens up the rest of the landscape for the plaintiffs' attorneys. They're able to say, well, what other design features are there of these platforms, and what incentives are they creating?
We're not gonna talk about, you know, the actual messages that are being traded back and forth on Snapchat, or the actual content of the posts on the Instagram feed, but we are gonna ask about things like infinite scroll and autoplay video and push notifications that arrive continuously throughout the night and might disrupt your sleep. And all of a sudden, they were able to find purchase because they had that initial precedent.

The thing that really grabs me about that is that Snapchat had made that filter. That was Snapchat's speech. They were the ones saying, well, if you drive fast, we'll generate a speedometer reading for you. And in these cases, it's still not the platform's speech, right? You can make an infinite scroll, you can make autoplay videos, and those are just ways that they are managing the speech of others. Did they have to overcome that? Because that seems like where you would hit the 230 rocks over and over again: the platforms saying, we're just managing the speech of others, it's still the First Amendment.

I think the plaintiffs were able to successfully argue that infinite scroll is not the speech of others, right? There's no sort of liability of another person that gets involved here. It's: you built a product, and the product is defective. They were able to successfully liken these things to cars without seat belts, and it just really resonated with jurors. And I think it's worth taking a minute to talk about why that might be, because I think this is something that the people I talk to at the social media companies never seem to understand. Everybody knows someone who has a huge problem with Instagram. This person is probably in your immediate family. They have deleted it a hundred times off their phone, and they always reinstall it. They've set the screen time limits, but they keep coming back over and over again, and they hate themselves for it, right? This is a near-universal experience in America now. And so when you sit a jury down and you say there's something wrong with Instagram, it's pretty easy to find a lot of people who say, that sounds right to me.

One of my feelings was that if any of these cases ever got to a jury, the thing Casey is describing would kick in. Everybody has these negative experiences with these social media platforms, and the companies themselves always tell us that statistically these problems are small. But their user numbers are so vast that even a small percentage is many, many millions of people. I think the platforms have never gotten their heads around that either. Did you feel the same way there, that once you put Mark Zuckerberg in front of a jury, there was just no way that the social media platforms would win a case?

It was really hard to know why these jurors were selected. Were they selected because they're the sort of people who don't use social media a lot, or who have had a lot of good experiences with social media? So I think that was the wild card in watching them: how are they really taking in this evidence? At the same time, it can be hard to hear some of this evidence for anyone who knows someone who's been through a mental health issue, or has struggled with just using their phone too much or being on social media too much. I think a lot of us know people like that, if we're not those people ourselves, and that's definitely going to affect them in some way on a human level. I mean, when I was watching Mark Zuckerberg on the stand,
he was talking about a certain beauty filter that they had, and one of his own employees pushed back on including it and, you know, talked about, I believe, having daughters, and thinking about how something like this would affect them. Maybe these jurors don't have as much experience with social media, or don't have the exact same experiences that this plaintiff had, but they certainly know other people in their lives who've probably experienced something similar.

It also seems really relevant to say that TikTok and Snap settled before the trial. That was the moment when I said, okay, they must be really, really scared. And I was actually waiting for Meta and YouTube to settle as well. Once that happened, I think it was clear they were in a lot of trouble.

The comparison here that everyone is making is to tobacco, to junk food, to sugar, right? We all know these things are bad for us. Nicotine is awesome, so we can't stop ourselves. There should be some regulatory framework, or we should make these companies at least communicate the risks. Does that framework hold for you?

One thing I think about that's kind of a big difference between this moment and that one for big tobacco is that there's that saying that there's no safe cigarette. And there are a lot of studies that show that's not really the case for social media: that some level of social media use has a positive, or at least neutral, effect on people. It's really that overuse, that compulsive use, that is the main problem here, and really the problem that people talk about. Social media does connect people with their friends, lets you stay in touch with people, lets you have social connection outside of your immediate community. But obviously it also has really harmful sides to it, and using it too much can cut you off from real social connection. So I think that's a big difference here. And so when people compare this to that moment, I do think that's really something we need to think about: these aren't really one-to-one scenarios. That said, I think the comparison is made to pull out how these companies are finally having a lot of their documents come to light in front of juries, just like happened in the big tobacco trials. So I think that is really the point to take away from that comparison.

Casey, you and I have talked about this a lot. We owe our careers to social media in very real ways. The idea that the internet let us bypass gatekeepers and go reach our audiences is very important to us. The flip side of that is, boy, a lot of bad people got to do a lot of bad things. How would you draw these lines?

It is very tricky, and you have to articulate it with some degree of nuance. To me, I separate the internet problems from the platform problems, right? Like really, Nilay, the internet is what gave us our careers. The internet is what knocked down the gatekeepers, let us sort of, you know, in my case, hang out a shingle on the internet and say, hey, I'll email you for money, right? That is something that did not really exist in pre-internet times. The platform problems are different, right? And they have a lot to do with algorithmic amplification, yes, but also these design features, right? This feeling we've been talking about: I don't want to look at TikTok as much as I'm looking at it, but I don't know how to stop. I have tried to stop, or I bought some device that bricks my phone when I walk in the door, right?
These are the problems of creating a platform whose only incentives will ever be to get you to look at it as much as humanly possible. And so that's why I think the scrutiny is finally drifting over to those things, right? We don't want to get rid of the internet. We don't want to get rid of your right to be able to post your opinion online. We want to get rid of this kind of machine that just increasingly seems like it's taking more and more of your time and attention in ways that make you feel bad.

We need to take a quick break. We'll be right back.

We're back with Platformer's Casey Newton and Verge senior policy reporter Lauren Feiner. Before the break, we were discussing the facts of these bellwether cases, and how social media platforms, in the absence of any meaningful government regulation, now find themselves unsympathetic defendants in jury trials, much the same way that big tobacco was once dragged into court decades ago.
Now I want to talk through the big implications of what happens next, and how we square the circle of social media regulation in a policy landscape that remains defined by Section 230.

So that is the sort of story of the cases. They went to trial, they lost, and we'll see what happens next. The real turn here is: what do they all do now? Right? They've been held liable for these product features. There's some conversation that we should have, that the industry and the United States of America are gonna have, about the difference between free speech and product features. We'll come back to that. But in the meantime, they've got to do something, right? They've gotta change something about how their products work to avoid ongoing liability from anyone else who might look at these cases and say, we're gonna sue you too. Casey, this feels like a trust and safety problem, right? This is your audience. These are the people you talk to the most. What is their reaction to this?

Their reaction is really negative. In particular, talking to people who still work there, what they'll say is: even if you buy the plaintiffs' arguments here, fixing this is really tricky, right? Because again, even if you believe that this individual teenager had a horrible time looking at these platforms for too long and it made all of her problems worse, okay, which design feature of this platform are you going to remove, and how is that going to fix her problem, right? If Instagram and YouTube did not have autoplay video, if they didn't have infinite scroll, if they didn't have push notifications, would that have improved her mental health to a point where she no longer would have sued the company saying this is a defective product? I don't know, right? I think that the problem that we just have as a society right now is we don't know what safe social media is. We don't know what features are really the most dangerous. I think we have instincts. I think there are experiments that we should run. But it's not as simple as, well, just turn off the autoplay video and all the teenagers will go play outside again.

Is it as simple as none of the teenagers in Australia should use social media?

Here's the thing. As somebody who writes more about social media than anything else, I have been shocked at the degree to which I am just throwing in my lot with Jonathan Haidt. Because I also don't know. I do not know which are the features that we should get rid of that are gonna make all the teenagers safe. What I can tell you is nobody who works at the platforms cares enough about any of your teenagers for me to trust your teenagers with them. So I would rather say, don't look at it until you turn 16, because I know that's gonna be better for you than them looking at it.

So I think we can hear Casey, who talks to the people who work for the platform companies, fully crashing out about that experience. Lauren, you talk to policymakers all day long. Nominally, you are our policy reporter in DC; you cover Capitol Hill. We don't send you to courtrooms all day and all night, although that's what you've been doing. On that side of the house, what are the policymakers doing in reaction to these verdicts?

So far we've seen a big push from the lawmakers who are behind some of the biggest social media reform laws, like the Kids Online Safety Act, saying, well, this just shows that we need these new laws, or we need to repeal old laws like Section 230, in order to make kids safe.
So I think that is the big push right now. It's still really early days, though. And what I am going to be really interested to see is: is that kind of where the momentum moves, or is there maybe even a kind of counterbalance to that, that says, maybe let's slow down, because actually the sort of cases we thought wouldn't be able to go through the courthouse are actually moving forward, and they're doing so even with Section 230 in place, even without COSA. I'm gonna be really interested to see which way that argument goes, and if that kind of speeds up or slows momentum in either direction.

All right, I warned you both that I was also having a crash out about all of this. And Lauren, you've just arrived at it. The notion that those laws have anything to do with these trials, and that these trials should let the government pass what amount to very strict speech regulations, is just making me feel personally crazy. Okay, the platforms had some design features that made them addictive, so we should pass COSA, which will restrict the speech of marginalized groups? That does not have any through line to me. Josh Hawley is saying we should get rid of Section 230 and these trials prove it, and I can't tell you why that is. I cannot make the link in my brain between "okay, the platforms were optimized for virality and engagement and negative sentiment" and "so making them responsible for the speech, in a way that will force them to take down more speech, is the way to solve that problem." I cannot link those ideas together. Can either of you?

Truly, I have read so many of the interviews with the Republican policymakers when they get asked about this stuff, and none of them seem to understand that if they do in fact get rid of 230, platforms will overmoderate content, because they will be in terror that a wide variety of things that can now be linked back to them could potentially result in legal liability. And they're gonna hate it, right? These are the guys that hate all content moderation. And if you delete Section 230, you're gonna get more of it. So no, it doesn't make any sense.

Lauren, you've covered bipartisan attempts to reform 230, bipartisan attempts to do age verification, and laws like COSA. What's the view on the Democratic side?

I think there really are a lot of Democrats who support COSA and are fully on board with those kinds of changes to the law. They've definitely acknowledged some of the critiques, around, you know, this might harm marginalized communities or make it harder to access certain kinds of content that might get politicized on the internet, but generally they just think that those concerns have been pretty much dealt with in the language of the statute, or that it's not really gonna come to pass, and they've just accepted that; they feel like this is the best way forward. I mean, certainly not all Democrats; obviously Ron Wyden, who co-authored Section 230, has not supported COSA. But there really is broad bipartisan support on these kinds of issues. So I think that's going to be the challenge for the hardliners on Section 230 and against COSA right now: to think, is it that there's never gonna be anything that changes on these issues? Or is there gonna be some kind of change, and we have to figure out what we can live with?

Here's where it gets really complicated for me. And you two are just gonna help me process these feelings, together, as a family. I look at this and think: okay, there's a big trial that got lost.
These companies are now liable for more of what happens on their platforms, in a narrow way. And now there's a group of people that want to say: you're actually responsible for everything. We're gonna tear down 230. You're responsible for the content that you're distributing, and that will lead to even more liability, and maybe you're gonna take even more steps. And then I think, well, that's bad. Taking down 230 is bad. I've felt that way for twenty-odd years. There's an infinite amount of coverage on theverge.com about why tearing down 230 is bad. And then I sit there for one more turn, and I think, well, why? We've all talked to Ron Wyden. Ron Wyden has been on the show. Lauren, I think you just recently spoke to him as well. Ron Wyden's a nice guy. Chris Cox, who wrote 230 with Wyden, is a nice guy. The world that they were trying to create with Section 230 never happened. It literally does not exist. This law is thirty years old. It was written in a time when AOL and Usenet existed and were the dominant ways of communicating online, and their goal was to create a competitive marketplace of moderation, where if you wanted your computer to be safe for your kids, you would literally download software, run it locally on your computer, and it would sit in front of CompuServe and filter the internet for you. And that just never happened. It never existed. So now I feel like I'm in this place where I'm required to boldly defend a thirty-year-old law whose policy goals were never achieved, and I don't know why. Casey, I know you've been wrestling with this too. How should I feel about this?

I have complicated feelings too, because I want Section 230 to exist so that platforms can host political speech, all sorts of speech. I think that it creates the possibility for platforms that are very rich and vibrant and fun. At the same time, there's this 230 case that I paid a lot of attention to as a gay guy, about Grindr. You guys, I'm sure, are familiar with this case, but basically there's this horrible ex who's like, I'm gonna get back at my ex by posting his photos on Grindr, and I'm gonna send everyone his physical address and say, go to this guy's house, and he's gonna indulge your craziest fantasies and give you drugs. And this gets tossed out because of Section 230, right? They sue Grindr saying, this is awful, you gotta do something. And Grindr is like, 230, and the case goes away. That seems really awful for the victim of that case. If I were in that situation, I'd be really mad at Grindr too. Why should 230 be the thing that gets that person justice? Why don't we just take online harassment and violence more seriously in this country? So this is kind of how I square the circle: by saying Section 230 in general does still support the kind of internet that I want. And for a lot of the harms, mostly not the ones we're talking about today, but for a lot of the harms that do absolutely get enabled and protected by 230, I think we can probably find other ways of addressing the harm.

But here's another thought experiment. What if the brain trust over at Meta got together and said: what would Instagram look like if it were great for teenagers? Do you think it would look a lot like the Instagram that we have today? Or do you think it would look a lot different? I bet it'd be the latter, right? I bet it would look really, really different.
And I think that there is a world, we don't live in this world, but there's another world where the executives at Instagram did do that and said, you know what, we're actually gonna put out that version of Instagram for teens. And look: it's mostly educational content. It's actually not personalized to your teen at all. We've disabled all the communication features. You can only use it during daylight hours. You can imagine a million things that would probably just make this a safe product. So on some level, yes, it's tricky to figure out, oh, what would the right version of Instagram be that would not get Meta into trouble. On the other hand, I think you actually could kind of sketch it out. So my curiosity is: to what extent are they going to try to go down that road, because I'm sure they're gonna be desperate not to be sued by every teenager in America? And to what extent are they just gonna, I don't know, try something shady and underhanded that I haven't thought of yet?

They've announced, like, Instagram for younger people, right? These tools for younger people, and they get criticized for being cynical and trying to target kids. Do they have the social capital to say this product is safe anymore?

No. I mean, my sort of nihilistic view on this is that ultimately, what solves the Meta problem is that they just get outcompeted by another company that maybe is better in certain dimensions. But yeah, I don't think the change is gonna come from within from these guys, 'cause all they care about is just winning, and for them, winning looks like maximum time engaged.

I mean, to be fair, Mark Zuckerberg is currently busy hiring and firing hundreds of AI researchers every week, toward some goal that is yet to be defined. And the idea that he's gonna stop and put all of his attention on Instagram Safe for Kids? Maybe only existential amounts of litigation will make him do that.

Yeah, but I honestly wonder if Zuckerberg is the right face of teen safety in America. And I think the answer is just flatly no.

Yeah, I don't think the track record really would lead you to putting him in charge of that particular project.

Again, and I think it's important to underline this for folks: to Meta, addiction looks like success. They have huge teams inside the company, cognitive scientists who work to understand the human brain so that they can get you to pick up your phone and look at it as many times as possible. And this is why I feel so bad for the people who are mad at themselves for all the time they spend looking at Instagram. You were not in a fair fight, okay? You lost a rigged game. And the reason that Meta is doing that is not because they're literally evil. It's that they feel like the incentives of their business require them to do this. So unless those incentives change, no, Nilay, Meta is not gonna be the place to go to look for moral leadership on teen safety.

We have to take another quick break. We'll be back in just a minute.
We're back with Platformer's Casey Newton and Verge senior policy reporter Lauren Feiner. We just spent a good deal of time talking about Section 230: whether the world it was designed to create ever really came into existence, and whether or not Section 230 itself is still worth defending. But there's another complication here, an important one: the First Amendment. Let's get into it.

The last piece of the puzzle, which I haven't really touched on here but is definitely a through line, is the First Amendment, freedom of speech. We are talking about platforms that regulate and control vast amounts of speech from almost everybody in the country, all the time. When you talk about changing the limits on these platforms, and what they are liable for, and how their products work, you are very directly talking about how speech is amplified and distributed in this country. There are a lot of people who have built entire businesses based on understanding how Meta will make their stuff go viral. You can have a lot of feelings about what those businesses are and what they look like and what they're doing to the brains of teenagers, but there are a lot of people who built really big businesses on the back of these platforms. Are we just gonna run headfirst into the First Amendment here? Is it impossible? Mike Masnick, who runs Techdirt, he was just on the show, good friend; he thinks it's a disaster for the First Amendment. Taylor Lorenz, a friend, thinks this is a disaster for the First Amendment. Their argument is that you cannot separate the product from the speech. The product itself means nothing; it is the speech that the product is distributing that is the problem. And so you are just trying to backdoor your way into a speech regulation by making the product liable for whatever harm. There's a part of me that buys this. But Casey, I know you think you can pull the two apart.
I agree that this is tricky, and we should be careful, and lawsuits are often not the best way to work through this stuff, because in general I would rather have lawmakers and policymakers writing really careful versions of this. At the same time: why is infinite scroll speech? Why are streaks speech? Why is autoplay video speech? At a certain point, I think you can get yourself all the way to, why do we make Ford put seat belts on their cars? Like, that you're compelling speech. It's like, no, you're compelling a seat belt. I think you should be able to compel product safety features once it becomes clear that you actually have a product safety issue. Now, I should say there are things that I would actually love to compel these platforms to do that are just obviously unconstitutional. Like, I would love to compel them to show educational content to children, in the same way that Congress once passed a law saying that broadcasters needed to provide at least three hours of educational programming a week. I think that was really good for society. It turns out, at least when you apply it to social media, that's just obviously unconstitutional. So I do think that you have to be really careful here. But if you're gonna tell me that every single product feature of every social media app is speech, you truly are caping for these platforms in a way that makes me uncomfortable.

Lauren, one thing that I've been thinking about a lot is what happens to 230 in a world where the platforms are generating more and more of the content directly with AI. Google's AI Overviews: that is probably Google's speech, even though it's synthesized. Do any of these regulatory regimes, or attempts to change any of these laws, contemplate that problem?

I think that's kind of the new Wild West that we're gonna be running into here, with probably new lawsuits. But I think even Ron Wyden, who we've discussed many times today, has said that AI outputs aren't necessarily protected by Section 230. So I think those will likely be treated differently. I mean, we won't really know till we see a court case come out on it. But I think that's going to be a big question. And I think the thing to remember with Section 230 is that it's really a procedural tool that stops lawsuits kind of in their tracks, and how cases get decided in the end is based on the First Amendment. So, you know, unless you're going to get rid of the First Amendment, getting rid of Section 230 doesn't really completely get rid of the problems that maybe some people think it would.

I want to ask you guys what you think about something, as I'm still working through this in my own mind. We were talking earlier about, okay, what is the specific feature that leads to the mental health problems suffered by Kaylee and some of the other folks in these bellwether cases? And I do suspect that, you know, autoplay video, infinite scroll, endless push notifications, that all has something to do with it. I suspect, though, that maybe the strongest factor is algorithmic personalization, right? It's: I search for one video about how to get skinny, and now all of a sudden I'm in a nightmare wasteland of eating disorder content. And that actually does increase my depression, and it increases the intensity of my eating disorder. Okay, as a society, I think we want to stop that, right? We don't want you to get dragged down that rabbit hole. We don't want you to develop that eating disorder. Can we regulate that?
This is actually just the trickiest issue to me, right? Because on one hand, I could see Congress passing a law saying, hey, if you're sixteen and younger, we just want to disable algorithmic personalization, at least at the level of the individual, right? Maybe we'll group you into a bucket, and we'll say 16-year-olds in America seem to like this kind of content, and we're okay with that. But you personally, no. We're gonna block that for you, because we don't want you to get dragged down a rabbit hole. But is that constitutional under the First Amendment? I don't know. I'm just curious what you guys make of that.

I've been thinking about this a lot, and I keep thinking back to Barack Obama on Decoder, where we talked about regulating AI a lot. And he was talking about regulating AI with me because he felt he had failed to regulate social media. And you could see the connection in his brain. It was clear as day. He's like, we failed on social media, we gotta get AI right. And I kept asking about the First Amendment over and over again: how are you gonna get past the First Amendment? And at the end, he said, look, you just need a hook. You just need to find a hook, the way that we found a hook to regulate broadcast television. In the case of broadcast television, the hook is very obvious, right? There's only so much spectrum. It's a scarce public resource, so we can make some regulations to make sure we make good use of that resource. And you can immediately see the danger in that, which is that Brendan Carr has power over broadcast television, and now we have an unrestrained speech regulator in this country. That's not good. At the same time, the idea that Barack Obama's like, you just need a hook, is a reflection of the standard in the law, which is called strict scrutiny. You can do a speech regulation under the First Amendment if it's narrowly tailored to achieve a compelling government purpose. These are the words in the precedent: strict scrutiny, narrowly tailored, compelling government interest. Well, I don't want a bunch of sixteen-year-old girls to get eating disorders. That feels like a very compelling government interest that you can attach a very narrowly tailored rule to. And I'm very curious if that is the future: where we're gonna say, this stuff causes harm, here's one rule about this content. You can detect this content. With the power of AI, Mark, you can now use all those GPUs to detect the eating disorder content. Get rid of those communities. I think that's just as bad, right? That's just as bad as Brendan Carr, unrestrained speech regulator. That's just a bunch of government speech regulations. But if 230 prevents mass litigation against the platforms, right, because as Lauren's saying, it's a procedural mechanism that says you can't sue us at all; if you have to dance through these hoops of, well, it's product design features, but no one can identify the specific product design features; then I think a bunch of state regulators are going to say, look, there's some stuff we know is bad, and we're gonna pass those laws, and we're gonna take those to this Supreme Court and say these are narrowly tailored to meet a compelling government interest. I don't know if that's how that will play out; I suspect that's where it's going to start. And I certainly don't know if that's good. But you can see that that is the next escape hatch here, because that is the standard for a law that regulates speech in this country.
Casey, I think that's exactly the right question about algorithms, because I think it's much easier to make the argument that, you know, infinite scroll or autoplay, it's not really about content; it's not really even much of a decision by the platforms. But what a company chooses to program their algorithm to recommend or not recommend, those are kind of their deliberate choices. And, you know, we've already had a Supreme Court decision saying that content moderation is basically editorial discretion. So I think that's where it gets really tricky. And I think you're right, that is kind of exactly the sort of thing that people who are advocating for these changes want to see change, but it's probably the trickiest one to do.

Adi Robertson wrote a piece for us a while back on how America turned against the First Amendment: this notion that we all care about free speech, and everyone says it, and then you kind of push on it, and everyone wants a little bit more speech regulation than before. And that has only been growing over time. Even the people that are like, I love Elon. We're watching, in the Elon Musk-Sam Altman trial, texts from Mark Zuckerberg to Elon Musk saying, I'm deleting all content that identifies the people in DOGE. And Elon's like, great, do you want to buy OpenAI with me? Like, Mr. Free Speech Warrior is like, yeah, delete that stuff. And Mark is saying, I will never, ever cave to the government again, while he's emailing the government saying, I'm deleting the names of government employees. This is crazy to me. And it just seems like we are entering a period where there's more pressure from the government on speech than ever before, everyone is a little more okay with it than ever before, and we are all still pretending like we all care about free speech the most. Casey, that feels like a nightmare in the trust and safety context. You wrote, I think at the beginning of Trump two, about how trust and safety was out of favor and no one was pushing back anymore. That was a while ago. What does it feel like now?

So I wrote this piece, and the headline was, "Is anyone left to stand up for trust and safety?" You know, trust and safety used to be a really vocal part of the tech industry, and they advocated for a lot of good pro-social civic values. They talked a lot about human rights. They tried to bake human rights principles into the policies that these platforms observed when they were moderating content. And so I just sort of had a natural affinity for them, right? In my view, these were the good guys. And then Trump gets swept back into power, a bunch of layoffs happen, and every platform decides, almost without exception, that their best move is to try to curry favor with the Trump administration. And all of these folks just get pushed aside. The ones who were the most vocal about human rights principles disappear. And all of a sudden you have people like Joel Kaplan at Meta running the policy operation, and his main job is just to essentially get Donald Trump to like Mark Zuckerberg and try to ensure that we get, you know, whatever we want. And it's been hugely effective for them, by the way. Mark Zuckerberg has gotten an insane number of things from Donald Trump, and I'm sure he'll get more as the years go on. So I got a lot of pushback from the trust and safety community when I wrote this piece, because I was essentially calling them out, just being like, hey, where are you guys?
Are you actually gonna get on a microphone anywhere and say, hey, it's really bad what is happening to our industry? And what they told me, very justifiably, was: we do not have the power that you think we have. And also, when we do speak up and when people do know our names, we get death threats, and we get hounded to the ends of the earth, and it's really scary. And you're asking us to sacrifice maybe even our lives to speak out in favor of these principles. That's kind of a big ask. All of that is fair. And yet, fast forward to almost a year later now, and I think the question still stands. What happened when these people stopped speaking out was they just gave free rein to the oligarchs to run these platforms as they see fit. The really scary thing to me is that trust and safety is no longer meaningful at any of these platforms, except as a compliance function to keep them in line with various regulations. And the result is now you just have a bunch of oligarchs trading favors over Signal.

Lauren, I want to end with you. Obviously the regulatory side of this is at full throttle right now. They have something that at least shows that Meta is bad, that YouTube is bad. You can make some moves. What do you think happens next on that side of things?

I think we're gonna see a lot of discussion in Congress about, you know, whether to pass these new laws or to repeal Section 230. But where we've seen most of the action has been in the states, and I think we'll probably continue to see that move forward. I think in the courts we'll see these cases be appealed, and at the same time, we're gonna see new cases brought. In the LA case, there are, I think, over 1,500 cases behind that one. There are several more bellwether trials just in that set of cases that are already scheduled; the next one, I think, is gonna be in a few months. And then there's a totally different set of bellwether trials in a federal version of these cases, with the first one kicking off in June. And that's, you know, a school district case. There are school districts, state AGs, individual plaintiffs. So this is really not going to slow down at all. And I think, if nothing else, what these trials have done is bring to light a lot of this information about how these companies work, and bring more awareness among the general public about, you know, what to be thinking about and aware of when their kids are using social media.

It does feel like just a perfect description of the experience of being in America right now: there's gonna be just a mishmash of policies across the country until everyone pays enough money to the lobbyists to get a law passed that, like, solves the problem. That feels like at once the most nihilistic, cynical thing I can say, and also just how everything works all the time. Do either of you see an off-ramp from that?

I mean, recent history would suggest that no, there's not really an off-ramp, because again, all the incentives for these companies are to get you to look at their app for as long as they can get you to do that. And until the pain of those incentives is worse than the benefits of the revenue that brings in, and what it does to their stock price, I don't see big change coming.

Lauren, do policymakers sense they're trapped in this kind of doom loop?

I feel like the policymakers who've decided that COSA is the way, or repealing Section 230 is the way, you know,
that is where their focus is. You know, I don't think there's kind of this new discussion about how exactly we should do this. We have seen some newer approaches with things like app store age verification, and there are kind of different variations on how that could potentially work, whether it's real age verification or age assurance. But in general, policymakers have chosen what they think the solution is, and that's how this conversation is going forward. And I think if people want to change, you know, the mechanisms of that conversation, they're really going to have to inject new solutions or think differently about the incentives here.

Here are my three ideas, just to end with. I'm curious for your thoughts. One, I think a federal privacy law is long overdue and doesn't insult the First Amendment. Two, I think, Casey, to your point about algorithmic personalization...
