
Decoder with Nilay Patel

The Verge

Compelled speech and AI tool building

From "Anthropic doesn't trust the Pentagon, and neither should you," Mar 12, 2026

Excerpt from Decoder with Nilay Patel

"Anthropic doesn't trust the Pentagon, and neither should you," Mar 12, 2026 (starts at 0:00)

Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today we're gonna talk about the messy, fast-moving situation at Anthropic, the maker of Claude, which now finds itself in a very ugly legal battle with the Pentagon. The back and forth is complicated, but as of a few days ago, the Pentagon had deemed Anthropic a supply chain risk, and Anthropic had filed a lawsuit challenging that designation, saying that the government was violating its First and Fifth Amendment rights and, quote, "seeking to destroy the economic value created by one of the world's fastest growing private companies." I can tell you right now, we're gonna be talking about the twists and turns of that case here on Decoder and on The Verge many, many times in the months to come.
But today I want to take a step back and really dig in on one very important part of this situation that hasn't gotten nearly enough attention as things have spiraled out of control: how the United States government does surveillance, the legal authority that allows that surveillance to occur, and why Anthropic was so distrustful of the government saying it would follow the law when it comes to using AI to do even more surveillance. My guest today is Mike Masnick, the founder and CEO of Techdirt, the excellent and long-running tech policy website. Mike has been writing about government overreach, privacy in the digital age, and other related topics for decades now, and he's an expert on how the internet and the surveillance state have grown up in interconnected ways. You see, there's what the law says the government can do when it comes to surveilling us. There's what the government actually wants to do. And most importantly, there's what the government says the law says it can do, which is often exactly the opposite of what any normal person simply reading the law would think. You'll hear Mike explain in great detail in this episode that we cannot and should not take the United States government at its word when it comes to surveillance. There's just too much history of government lawyers twisting the interpretations of simple words like "target" to expand surveillance in complicated ways, ways that usually only cause concern in legal circles and only bubble up to the mainstream when there are huge controversies like the Snowden revelations. But there's nothing subtle or sophisticated about policymaking in the Trump era. And so with Anthropic, we are now having a very loud, very public debate about technology and surveillance in real time, on the internet, in blog posts, in posts on X, and in press conference soundbites. There are positives and negatives to that, but to make sense of it all, you really have to know the history.
That's what Mike and I set out to explain in this one. Whatever your views on AI and the government, this episode will make it clear that both parties have let the surveillance state get bigger and bigger over time, and we're on the cusp of the biggest expansion yet when it comes to AI. Before we start, a quick reminder that you can listen to this episode, or any episode of Decoder, completely ad-free by subscribing to The Verge. Just go to theverge.com/subscribe. Okay, Techdirt founder and CEO Mike Masnick on Anthropic, the Pentagon, and AI surveillance. Here we go.

Mike Masnick, you are the founder and CEO of Techdirt. Welcome to Decoder.

Yeah, glad to be here.

I'm excited to have you on. I was just saying I'm shocked you've never been on the show before. You and I have been writing and posting around each other for a long time. A lot of The Verge's policy coverage owes a debt to what you've done at Techdirt, and what's going on with Anthropic is so complicated but hits so many themes that you have covered for so long. I'm glad you're finally here.

It is a complicated mess of a topic, but I'm excited to be digging in on it.

So what I want to focus on with you is not the details of whether Anthropic is gonna sign a contract with the government or whether OpenAI is going to get that contract instead. I'm confident that between the time we record this and the time people listen to it, there will have been more tweets and more things will be different than they were before. What I want to focus on is just one of the two red lines that Anthropic has really laid out. One of them is autonomous weapons. The law there is a little bit more nascent, whether or not the weapons even exist or have already been deployed by Russia in the Ukraine war. I want to set that aside because I think that is gonna come into more focus all on its own, on its own schedule.
The other red line that I do want to spend a lot of time on is mass surveillance.

Yeah.

And there's quite a lot of law here about mass surveillance. There's a lot of history, a lot of controversial history. The entire character of Edward Snowden exists because of controversies around mass surveillance. And it all comes down to, I think you are the one who posted this, the National Security Agency, which is part of the Department of Defense, which we have to call the Department of War now for some reason.

We don't have to do anything.

That's true. Here in America, we don't have to do anything. But the National Security Agency has basically redefined what a lot of words mean out of colloquial English to mean, oh, we can just do surveillance. And then every so often there's a scandal when people discover that they're just doing surveillance. So just set the stage there. I don't want to rewind you all the way, but it's been quite a lot of time that this pattern has repeated itself.

Yeah, and it sort of depends on how deep you want to go. But the short version is, obviously, in the post-9/11 world, the US passed the Patriot Act, which gave the government some ability to engage in surveillance that was supposed to be for protecting us against future terrorist threats, and over time that got interpreted in interesting ways, and there were some limits on that. We also had the FISA court, which is a special court that is supposed to review the intelligence community and their activities, but has traditionally been a one-sided court. Only one side gets to plead their case to that court, and it's all done in secret. And so there's a lot of stuff that was not known.
And then there was one other piece in all of this, which goes all the way back to Ronald Reagan, which is Executive Order 12333, which is supposedly about setting out the rules of the road for intelligence collection. So you have a few sets of laws and an executive order that, in the parts the public can read, seem to say certain things about what our government, and the NSA in particular, can do in terms of surveillance. Read with a plain English dictionary, the kind you and I have and understand, you would come away with a belief that the NSA's ability to surveil Americans was very limited. In fact, to the point that if they realize they are surveilling a US person, they are supposed to immediately stop, cry foul, erase the data, and all of this other stuff. And there were rumors for a while that that was not really happening. There were hints. In particular, Senator Ron Wyden was very vocal, going on the floor of the Senate and saying, something is not right here, and I can't quite tell you what. Or in hearings he would ask intelligence officials, are you or are you not collecting mass data on Americans, and they would either deflect or in some cases outright lie. I believe it was one hearing in 2013 with James Clapper, who was the Director of National Intelligence at the time. He was asked directly on this point, and he basically said, no, we don't collect data on Americans. And that was a big part of what inspired Ed Snowden to leak the data, the reports that he leaked to Glenn Greenwald, Barton Gellman, and Laura Poitras.
So, from all that, what we began to discover was that the NSA has its own dictionary, somewhat different from the dictionary that you and I use, such that they can interpret words in ways that differ from their plain English meaning, including words like "target," which feels like kind of a key word. So here's a broad understanding of what this is. In theory, they're only supposed to target people who are not "US persons," I think is the phrase. But the way it had been interpreted over time was that anything that mentions that person, anything that is about a foreign person, is now fair game, even if that is the communications of a US person. So if you and I were to text each other and mention a foreign person, that is now fair game for the NSA to collect and to keep and to store. There's a second part of this. I mentioned Executive Order 12333 from Ronald Reagan, which, as the technology changed over time and the internet grew, effectively allowed the NSA to tap into foreign communications, but that included any communications that maybe left the US en route somewhere. So if I'm texting you and a message went from me in California through a fiber optic cable that happened to leave the US, the NSA could put a tap on the part once it's outside the US and collect that information, even if it was just going to you within the US. And then what they could do is keep that information, even if it was on US persons, and they could do specific searches on it later, sometimes referred to as backdoor searches. So they collected this information that we believe they weren't supposed to collect in the first place, but they could keep it. And they promised, they sort of pinky swore, that they would keep it private. But if they did a search and found that you or I mentioned a foreign person, then suddenly it was fair game for them to do whatever they wanted with.
In total, that has turned into a world in which they can basically collect any information that happens to touch outside the US, and even if it is entirely between two US persons, if they mention or even hint at someone who is not a US person, suddenly it is fair game to be collected. And from that, we've gotten what appears to be a form of mass surveillance of US persons by an NSA that claims and publicly states that it does not spy on US persons.

How did we get to this point? This is a lot of incremental baby steps. You mentioned James Clapper in 2013, that's the Obama administration. You mentioned Ronald Reagan, that's the '80s. We're going through Democrats and Republicans here, right? The war on terror happens in the George W. Bush administration; 9/11 and the Patriot Act happen in the George W. Bush administration. This is a lot of incremental bad things under presidents of both parties, under Congresses of both parties. How did this happen?

I mean, the simplest form is that nobody wants to be the one who says no, because that makes them look bad, right? And obviously they also want to protect Americans, right? That's part of their job, I guess. And so, you know, if you have an intelligence community that is basically operating in darkness, because that's what intelligence communities do, and they keep coming to you and saying, hey, if we could just get access to this information, it'd be really helpful in preventing a terrorist attack. And there may be cases where that's true, where the intelligence community is able to use this information in a way that works well. But we also are, in theory, a society of laws and a constitution that we're supposed to obey. But that allowed for the fact that administration after administration, again, Republican and Democrat, had lawyers who were very clever and who would look through it and say:
Well, if we position this this way, or we state this this way, or we interpret this that way, we can get what we want and not technically break the law, or not technically violate the Fourth Amendment. The assumption was always, well, we can sort of bend the law, or bend our interpretation of the law, and nobody's really ever going to see this, or nobody who cares is really ever going to see this, and therefore we'll get away with it.

We need to take a quick break. We'll be right back.
We're back with Techdirt founder Mike Masnick, talking about mass surveillance and the history of the United States government manipulating language to get around our constitutional rights.

There are two things there that really jump out at me. One, you and I both read a lot of court decisions, appellate court decisions, and Supreme Court decisions. And there's a fight in our Supreme Court about how to literally interpret the words in our statutes and our laws. I won't get too far into it, but I would say generally the idea that you should just read the words on the page and do what they say is the dominant strain of statutory interpretation in the United States, right? Left or right, they both say it. They might argue about some very esoteric fine points of what that actually means, but you should just be able to read these words and do what they say. That's not up for grabs. We've landed on at least that first pass of what you might call textualism.
How do lawyers across administrations of both parties get this far away from the dominant mode of legal decision-making in our country, one that justices of both parties agree is at least the first step?

I wish I knew the exact answer, but what I think it is, is motivated reasoning, right? As a lawyer, you are there to defend your client. And the success, if you can call it success, of our legal system tends to be based on having an adversarial situation, where you have different sides arguing over these things and the role of the adjudicator is to narrow in and figure out which side is actually correct. One of the problems with the intelligence community, in the setup of it, is that you don't have that adversarial situation. And so that makes it easier for one side to justify the argument that they're making, because nobody is really pushing back on it. And you combine that with the overarching fear of another terrorist attack, anything related to national security. So even when you have something like the FISA court, I mean, the FISA court was somewhat famous for effectively being a rubber stamp for many years. I forget the exact numbers, but it was something like over 99% of applications that went to the FISA court to allow for surveillance of certain situations were granted. And it's easy to just say, well, if it's 99%, that's obviously too much. Obviously, those bringing claims to the court are picking and choosing. They're not, for the most part, bringing totally crazy claims. But without that adversarial aspect, and with a very strongly motivated group of people who think we need to do this, or are being told by an administration that we need to do this, they'll find ways to do it. And that's where you end up over time.
Has there been anyone involved in this process who's ever woken up and said to themselves, boy, we've managed to redefine the word "target" to mean anything we want?

I mean, obviously you had Ed Snowden, who leaked a bunch of documents. You had John Napier Tye, who wrote a piece for The Washington Post in 2014, I believe, which revealed the interpretation of Executive Order 12333 and said that's the real issue to pay attention to. You have other people who have spoken up about these things, but for the most part, the people who are involved and working within the administration on intelligence community stuff are bought into the view of the intelligence community, which is that the overriding goal is to protect the country from something bad, and the best way to do that is to have as much information as possible. And it's easy to be sympathetic to the argument that yes, having more information may allow them to catch something earlier or find something important. But one, that might not be true. Getting too much information is probably just as bad as too little information, because it can often hide the information that is actually useful, the information that you actually need to determine something. But also, we have a constitution in the first place, and we have reasons why, in theory, we're not supposed to allow for mass surveillance without probable cause. And as a country that believes in the rule of law, we should be able to live up to that. When all this stuff happens in darkness, you will tend to lose sight of that.

I think this brings me to Anthropic. Anthropic is primarily an enterprise company. They're good at dealing with the government, right? They've built those muscles. They're staffed by people who are really well versed in some of this stuff. They obviously looked at Pete Hegseth saying we want all lawful uses.
And they went two levels of interpretation down and said, well, your literal belief is that these words do not mean what they say they mean on their face. And so "all lawful uses" is too big, and we want to put some guardrails on, particularly on mass surveillance. Again, I'm going to bracket out autonomous weapons; that was the other red line. But particularly on mass surveillance, Dario Amodei is out there saying this can do too much, this is too dangerous, this is a Fourth Amendment violation. And the tension there is: you're saying you're gonna comply with these laws that say one thing, when after all this time they actually mean something completely different, and we just don't want to be part of that. That's the fight. I just want to compare that to Sam Altman, who, you know, swoops in to say we'll do all lawful uses and then posts this long message, like, here are all the laws we're going to comply with. And it seems like he didn't know how the NSA had reinterpreted these things and kind of got taken for a ride. And he's been walking it back since. Like I'm saying, even as we are recording, I'm confident there are more tweets and everyone's positions have changed. And Sam has been walking it back slowly. But it does seem like OpenAI, Sam Altman, got roped into reading the statutes on their face and believing what they said. Is that your interpretation of events as well?

I think there are two possibilities, and that's one of them. One is that he got played the same way that the public got played for many years.
The alternative theory, and I have no idea which one of these is true, is that he, or some of the lawyers at OpenAI, who I think are very competent and very knowledgeable, knew this but thought that they could play the same game that the NSA played for a few decades: that as long as they say these things, and they say the words but don't reveal the actual interpretations, they could get away with it too. So Sam comes out with the statement that makes it look like, you know, we had the exact same red lines as Anthropic did, and the government was great with that. In fact, I think Sam put it that they had a third red line, that Anthropic had two red lines and OpenAI had three, and the government was perfectly fine with it. And that left a lot of people scratching their heads. But I think it has to be either that Sam and whoever was surrounding him didn't understand how these things work in practice, or they did, and they just assumed that the public wouldn't know and therefore they could get away with it.

The other thing that comes to mind: again, AI is new, and it's so tempting to come at new technologies as though these are problems of first impression.

Yeah.

No one's ever had to think about this before. But the reality is everyone's been thinking about this stuff for a long time. Maybe the thing that's new here is not AI, but that the second Trump administration, instead of doing a bunch of lawyering that maybe no one will ever read to justify their actions to a secret court that no one's paying attention to, is just not that subtle. They're not that sophisticated. And they're just saying they're going to spy on everybody all the time. They just announce their intentions, in a way that maybe all administrations should just announce their intentions and see where the chips fall.
But I'm looking at, okay, there was Ed Snowden. Here in New York City, AT&T runs a building that everyone knows is an NSA building. It's just a giant building, and we're supposed to pretend it's not an NSA surveillance center, but it's right there. It's huge. None of that seems to have come to anything, right? All of these revelations, these leaks, and we haven't backed it off. In fact, it's only increased as so much of our lives has gotten more and more digital. And maybe the Trump administration being such a blunt instrument at all times might actually be the thing that causes the reckoning. Do you see that playing out that way?

Yeah, I mean, I think there are a few different things there. And it's not entirely true that we haven't backed off this stuff at all. The revelations did lead to some changes in how these things happen. There are now, I forget what they're called, these civil amicus people within the FISA court who will present the other side on certain issues. We've seen some of the authorities limited in certain ways, and they come up for reauthorization every so often, and activists have been very aggressive in pushing back and trying to put some more guardrails on. But to the larger question, I think there are two different things. I think you're half right in that this administration is not subtle and just says out loud the things it shouldn't.

We're at war with Iran. It's happening.

Right. Yeah, it's like we're not even gonna try the dance. But they haven't really said that directly about surveillance, especially surveillance of Americans. There have been hints of it, but they haven't come out as strongly on that. So I'm not entirely sure that it's that.
The other half of it, I think, maybe has to do more with Anthropic's positioning and the general view of AI as this possibly existential technology, where Anthropic has always presented itself as, we're the sort of thoughtful good guys. Whether or not you believe that is kind of beside the point, but they have this reputation out there of, we're trying to do this in a way that is safe, that respects humanity, and is paying attention to all of these things. And so when you have that clash, I think that's where the struggle comes in. You have a Trump administration that just wants to be able to do whatever it wants to do, and they're not subtle about that. And you have an Anthropic whose self-description and public persona is always, we're thoughtful and we respect humanity and rights and all of these kinds of things. I think that's probably where the clash came in, because Anthropic, as has been made clear, has worked with the Defense Department for a while and has many other contracts with the government. It hasn't been a problem. It was only in these specific areas, as the government was seeking to expand the contract that it had, that the senior leadership of Anthropic began to say, wait, we have to make sure that we're not crossing these red lines that would potentially harm our reputation as the sort of thoughtful, safe AI provider.

We have to take another quick break. We'll be back in just a minute.
We're back with Techdirt's Mike Masnick. Before the break, Mike was explaining how Anthropic's reputation as the safe, conscious AI company is being put at risk by the Trump administration's blunt demand that the tech industry essentially shut up and stop asking questions about how its technology will be used and to what end. Now I want to get into an important detail of the story, which relates to where our data comes from and where it lives. Because the Fourth Amendment prohibits unlawful search and seizure when it comes to data that we own and control. But in the era of cloud computing and cloud services, the vast majority of our data isn't kept on our devices or in our homes, but instead on corporate servers run by huge companies. And the government can, in a lot of cases, go and obtain it without a warrant.

I want to briefly ask you about surveillance in general, and in particular Anthropic's Fourth Amendment concern, right? The Fourth Amendment says the government can't unreasonably search you.

The best way to understand the Fourth Amendment is by listening to "99 Problems" by Jay-Z. So if you need to take a break, you can go listen to it.

That's great. It's all in there. I listened to it when I was in law school and it made perfect sense. But the government generally needs a warrant to search you. And as more and more of your life goes online, there are lots and lots of exceptions to this, but the idea is they should still need a warrant online. Anthropic's argument is, well, the AI will never get tired. It can search everything all the time. That means we're just gonna do mass surveillance. But even before AI showed up, the idea that the government could search everything that belonged to you was out there.
The idea that the government didn't need a warrant to search all of your stuff was out there. The idea that if any of your data ever went outside the country for a brief second, the government could intercept it there was out there. When I was in college, around the time of the Patriot Act, the debate was: they're not going to search your actual data, but they can get the metadata, and the metadata alone, the data about your data, will be enough to precisely locate you at all times, and even that is too far. And we've been doing this dance of what can the government collect, what is permissible, what do they need to keep us all safe, and what's too far. Those lines have moved. So just briefly describe the sort of generalized concern about surveillance at the scale where we are now, before the AI situation sort of exponentially made everything more complicated.

Yeah. And here I have to introduce another concept that I probably should have mentioned earlier, but it is important, which is called the third-party doctrine. The idea with the Fourth Amendment is that the government can't search you or your things without a warrant, and it can't get a warrant without probable cause that you've committed some sort of crime. But there's this concept, which came about decades ago, called the third-party doctrine, which says that doesn't necessarily apply, or doesn't apply at all, to things that aren't yours, even if it is your data. The earliest and most obvious version of this was the phone records that the phone company had of who you called. The phone companies weren't recording your calls, but they were recording, you know, if I called you, there would be a record at the phone company that says Mike called Nilay. What had been determined by multiple courts was that the government can go and request that, and they don't need a warrant for it, because it's not a search of your data.
It's this third party's data, and they can agree as a third party to just hand it over. But those were cases from the sixties and seventies, when it was determined that the government can get access to that without a warrant, and there wasn't that much third-party data out there. The rise of computers and the internet changed that. Now, everything is third-party data. Everything that we do is collected by some company somewhere, which has a record of it. And so basically every bit of data about you: where you are, who you speak to, who you interact with, what you say, what you're doing. All of that is pretty much held by third parties these days. And so the third-party doctrine has sort of swallowed the entire Fourth Amendment to some extent, where for anything about you that somebody else has, there's a much lower standard for what the government can do to request it. To be specific, this means when my data is in iCloud, the government can go to Apple and get my data out of iCloud without ever telling me. Well, they can request it. They can easily request it without a warrant. Then the company has its own rights and can determine what it wants to do with that request. It can just give the data up. It can reject the request out of hand. Or, and this is what most of them will do, they'll alert you and say, you know, the government is requesting some of your data; you can go to court and try to block them, and if not, we will hand over your data in seven days, or whatever it might be. Again, it depends. If it's a criminal investigation, then there may be some sort of gag order where the company is not allowed to tell you.
There are all sorts of situations, but most of them involve less than the level of protection that the Fourth Amendment would require if it was data, or any information, or anything at all in your own home. I'm asking this because it's obvious to everybody listening to this: the amount of data you have on someone else's cloud server is massive. Every single thing that you do on the internet now is generally backed up in some way or recorded in some way on someone else's servers. And the government has found this way to get around the Fourth Amendment and say, well, that's not actually yours, it belongs to Amazon or whatever, and we can go talk to Amazon. And Amazon has to stand in the middle of that process and say, we've invented another process to somewhat protect the people. And I look at that, and, you know, when I was covering the first third-party doctrine cases that covered the cloud services and the government kept winning, I mean, that's basically when I turned into the Joker. I was like, all of this stuff that we're pretending about textualism and the plain meaning, like, none of this means anything. Yeah. Because we've just used this ancient law to backdoor into everyone's data. And then I look at Anthropic and I say, well, this is the same pattern, right? This is a private company saying, okay, we understand your position. We understand that you've reinterpreted the law to mean this thing, and we're going to put some process in between you, our tool, and the data of Americans flowing through our service. And I'm just wondering if you see that parallel there between Anthropic and Amazon and Azure and whatever other cloud services exist that hold so much of our data. Yeah. Though I think there are a few clarifications that are important here that make this a little bit different.
And in fact, it was reported, I think The New York Times had this reporting first, that the main clause that was most important to Anthropic was specifically about data collected from commercial services, and not being able to use Claude on that data, which is exactly this issue in terms of third-party data. But I do want to clarify the main difference. What we were just talking about before, with Amazon or other third parties hosting your data, those were cases where, because of where they sit in the ecosystem, they were hosting your data directly. With Claude, it's not that anyone is worried about the NSA looking through your Claude usage, right? It's about the government going out and getting third-party data from an Amazon or, more likely, the sort of sneaky, hidden data brokers that serve ads on your phone and know all your locations and interests and things like that, and then feeding that into a system that Claude would then work on. That's what Anthropic really didn't want to be a part of. So wherever or however the government would collect that data from a third party, Anthropic said, we don't want our tool to be used on that data. There's a piece of that that just feels like Apple famously standing up to the FBI: put a backdoor in the iPhone, and Apple says no, and they stand up to Trump. There's just a part of however our system works in which big private companies get to say no to the government on behalf of their customers, and this felt the same. In the same way that Apple, again, won't put a backdoor on the iPhone, or the big cloud providers say there's a little bit of a process you have to jump through before you get the individual data.
Here it seems like Anthropic is saying, we're not just going to do bulk analysis of data that you have acquired from other parties, because that leads to 24/7 mass surveillance of Americans, and we don't want to do that. And that seems like a bridge too far for this administration. Is there any coming back from that? I mean, we'll see. In the past when that's happened, and it's happened plenty of times with most of the large tech companies, at some point they've said something is a bridge too far. And where that normally goes is to court. The companies will go to court, or the administration will go to court, and there'll be some sort of court battle. You know, backdooring the iPhone is a perfect example of that. It went to court and they fought it out, though they never quite got to a conclusion, because the FBI eventually did just manually break into the iPhone and then didn't want a court ruling to ruin the future. But in this case, where the escalation is, and where this is different from those past situations, is that rather than just going to court, they did this supply chain risk designation, which is just insane. This tool was designed to stop potential foreign malicious actors from supplying technology that could put hidden surveillance tools into the larger technology stack, so that those suppliers could be banned. To apply that to a US-based company, basically for having an ethics policy, feels like a real misuse of that tool. And even that tool, I think, was questionable in some ways, but you could understand the impetus behind it when you're talking about, say, a Chinese networking firm or something along those lines. Here it makes no sense. And so the reaction to this goes so far beyond what would normally be seen in this case.
Traditionally, there would be some sort of court case, and either side could start it, and it would just be a battle about how the contract could be applied. But that's not what is happening here. This administration is effectively saying: if you don't give us absolutely everything we want, if you don't set up your tools to work the way we want them to work, then we will effectively try to destroy your entire business. And that is an escalation. The galaxy-brain version of all this: FIRE, which is a free speech advocacy group, put out a blog post just before I started recording, making the argument that forcing Anthropic to build tools it doesn't want to build is a free speech violation, that it's something called compelled speech. And there's a lot of history here. This is some deep Verge and Techdirt, in-the-weeds, existential-crisis history. But it basically rests on the idea that code is speech. Writing code for a computer is a form of speech, and the government can't force you to do it. And a whole bunch of stuff flows from that. Do you buy this argument, that forcing Anthropic to build tools it doesn't want to build is compelled speech? Yeah, I actually think it is fairly compelling. Compelling compelled speech. But no, I do think it is an interesting argument. It's one that had been a little bit further down the list of issues I was thinking about; I was obviously mostly focused on the Fourth Amendment issues. But I think the FIRE argument is not wrong. And we've seen this in other contexts. It came up in the backdoor issue as well, in terms of trying to build backdoors into encrypted systems. Companies definitely raised First Amendment claims, saying that is compelled speech, to force us to write that kind of code.
And so I think it is a valid argument. It might be, again, one that courts are probably less willing to tackle initially if they can deal with these issues some other way. But I'm glad that FIRE made that post, and I think it is an interesting and compelling argument. It's just the nature of the second Trump administration that it's such a blunt instrument. It's like every rights amendment has to be challenged in some form or another, with every possible issue. I'm sure we can fit a Third Amendment violation somewhere in here as well. Claude has to live in your house. It's going to be great. We're doing one, three, four, and seven. Let's rack them up. Mike, this has been great. I cannot believe you haven't been on the show before. You've got to come back soon. Absolutely. Whenever you want me. Before we wrap up, really quickly, I want to bring on a very special guest. It's Helen Havlak, our publisher here at The Verge. Hey, Helen. Thanks for having me on. Helen, I wanted to ask, because you run our business, there's a lot of talk about AI. We just did a whole episode about it. How are we investing in that coverage? How are we growing that coverage here at The Verge? You and I obviously spend a lot of time talking about The Verge's AI coverage and how The Verge needs to cover the story. It's the biggest story as a standalone beat for The Verge. We've hired some amazing people, like Hayden Field, who are covering it as a standalone beat, but it's also the biggest story on kind of every desk of The Verge, right? Our policy desk is pointed dead ahead at the story, and we have a lot of fantastic people there, like Tina Nguyen, who we've hired, and Sarah Jeong, who's doing amazing reporting.
And then lastly, in our product coverage. You know, as you and I talk to each other, and as I go into market to talk to partners and advertisers about The Verge, I think what makes our coverage really unique is that it is rooted in the product coverage: what our senior reviewers have to say about the actual products these companies make, how they are to use, what the experience of using the product is as a consumer. Because I don't think you can tell the business story of AI and the policy story of AI without being deep in the product story of AI. And I think that is what makes it a perfect Verge story for this moment, and what makes The Verge the uniquely best place to cover this story. It is true that not enough AI coverage features the question: does this work? And The Verge relentlessly asks that question. Does this work, and will people pay for it? We often describe Helen as our firewall here at The Verge. She runs the commercial side of our business; I run the newsroom. Helen runs the advertising business and the subscriptions business, and there's a wall between that and us. Our business doesn't affect what we make here in the newsroom, and it's Helen and I who protect that dynamic. Helen is the firewall. Helen, help people understand that separation and what individual listeners and readers of The Verge can do to support us. One of the things that makes The Verge, I think, really unique and respected is how strong our ethics policy is: our coverage is definitely not for sale. But we do still have a huge advertising business. That's a big part of what supports The Verge. And so a big part of my job is to work with our partners, our advertisers, and our sales team to help them understand The Verge, why they should advertise with us, and what opportunities we can develop. You know, we've made new ad products like QuickPost ads. They're clearly disclosed.
We want to make good advertising that people like, because that also makes ads perform better. And so in running that business, I'm kind of on that side of the house. We also, for the last year, as many people here know, have a really big subscriptions business. And the single most important thing any individual person can do to support our coverage in this moment is subscribe to The Verge. Among the benefits of a Verge subscription: you obviously get unlimited access to read The Verge. We now have an ad-free version of this podcast, Decoder, where subscribers get that feed. And then, if you are interested in the story of AI and policy and some of the themes of this episode, your subscription also gets you a bunch of newsletters you should really enjoy. Tina Nguyen has a fantastic newsletter about MAGA tech policy and how all of this is unfolding, called Regulator. If you're interested in the product side of it, I'd really recommend Victoria Song's newsletter Optimizer, about the crazy AI intersection with wellness and how we all live. David Pierce's weekend Installer is always one of my favorites: what little things should I play with that are new and interesting? There's a lot of product to play with out there right now. And because I'm the publisher and firewall, I will shill shamelessly: a Verge subscription right now starts at $40 for the first year. It is incredibly good value, and it also supports all of this work we need to do and helps us invest in hiring more reporters to cover the story. I will tell you, I found out this week that some of the biggest executives at some of the biggest companies in the world read Installer the same way you do every single weekend. It's pretty good stuff. One thing you and I have been talking about is the shift in the advertising market. Decoder listeners know this well.
Influencer marketing is real; the idea that ads should be integrated with the content is everywhere. You've got some ideas for how we can do that, preserve our ethics policy, and keep that separation alive. We're going to hear some of those ideas from you on the show in a couple of weeks. Give the listeners a preview. Yes, I'm super excited to be piloting a new ad product for this podcast specifically. It's called Decoder Sessions. A joke we have that is also deeply true is that disclosure is our brand. And so I think the way The Verge handles this is to make really good products that do a good job for our advertising partners, but that also ideally make a much better product for you, our listeners and viewers, while also disclosing it and separating it to the standards of our ethics policy. So, one thing we are going to try on this show, where the thing you cannot do is pay us to have your executive talk to Nilay. But this is a show of interviews. This is a show of interesting conversations about business and organizational structure and technology. So we are going to take that format, which is to interview interesting people from companies about their businesses and their organizations, and we are going to make a new kind of ad break for you that hopefully you really enjoy, and that is better than a standard podcast ad: more fun, more engaging, more native. I will host it. I'm going to be talking to different executives from some of our partners. It will be separate from what Nilay's doing. Nilay will not get a heads-up on any of it. But I'm really excited about it. What we want to make is a better product for you and a better product for our partners that still holds to the high standards of The Verge. We have our first one coming up in a few weeks. I think it should be really fun. I'm talking to an executive from L'Oréal Group. It is going to be a really fun time.
I'm excited for this product because I want to go compete for where I think the advertiser dollars are going, which, again, every Decoder listener knows, I think, is headed toward the creator economy in ways that collide with journalism. So this is our approach to solving it. That one's going to come out; you're going to hear Helen do that interview. We're very interested in your feedback and how we can make that stuff better. I think the ads should be good. I think they should be fun to listen to, just like the show is. I just think someone else should make them. So, Helen, I'm excited for your first Decoder Session. Thank you so much for joining me quickly at the end here. Thank you, Nilay.
