
Hard Fork

The New York Times

HatGPT News Round Up

From "The Future of Addictive Design + Going Deep at DeepMind + HatGPT," Apr 3, 2026

Excerpt from Hard Fork


Dell PCs with Intel inside are built for the moments you plan and the ones you don't. They're for those all-night study sessions. The moment you're working from a cafe and realize every outlet's taken, the times you're deep in your flow and can't be interrupted by an auto update. That's why Dell builds tech that adapts to you. Built with long-lasting batteries so you're not scrambling for an outlet. And built-in intelligence that makes updates around your schedule, not in the middle of it. Find technology built for the way you work. Built for you. Now here was a really interesting situation, Kevin. Did you see, uh, this robotaxi outage that left passengers stranded on highways in China? No. So this happened in Wuhan, uh, recently. I've heard of that place before. Did they do anything else? Um, not clear to me. I'm not really familiar with their game. But, uh, apparently there was some sort of technical glitch that caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze, trapping some passengers in their vehicles for more than an hour. And I just thought, my gosh, what a nightmare. Just imagine you're in your robotaxi on the way to a wet market in Wuhan. You have an appointment with a pangolin who's gonna cough on you to see if they can transmit anything to you, and then your robotaxi freezes. It's a nightmare. It's an absolute nightmare. I think that robotaxi outage is definitely the worst thing that's ever come out of Wuhan. Yeah. When it comes to these Baidu robotaxis, my advice? Baidu don't. Oh boy. No, that was the worst thing to come out of you. I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, social media companies keep losing in court. How will that reshape the internet? Then, The Infinity Machine author Sebastian Mallaby joins us to discuss his new book on Google DeepMind and Demis Hassabis's quest to build superintelligence. Finally, it's been a while.
Let's catch up with some HatGPT. I missed you. Me too. Well, Kevin, while we were away, I was riveted by what was going on in the courtrooms in Los Angeles and New Mexico related to social media. Yeah, it has been a big week for these social media product liability trials that have been going on now for some months. And we actually got some verdicts. We did, and in both cases, social media lost. In LA, a jury found that Meta and YouTube had been negligent in the way that they designed features that they said were harmful to this plaintiff. They have to pay $6 million combined to this plaintiff. And then in New Mexico, the jury said, we believe that Meta has violated the state's Unfair Practices Act and has misled consumers about the safety of its products and has endangered children. In that case, they are ordering Meta to pay $375 million. Yeah, so we've talked a little bit about this, uh, series of cases against the social media companies. You know, social media companies, they get sued all the time for all manner of different things. I think what caught our eye, and specifically your eye, was the sort of legal theory underlying these cases. So talk a little bit about that and what makes this case different from other cases that have been brought against the social media companies. Yeah, so I would say there are kind of two big reasons why these cases are super important. One is that, uh, these are what are called bellwether cases. Kevin, you ever heard of a bellwether case? These are like cases that set precedent for other cases, yeah? Exactly. These are the cases that, if successful, are gonna open the floodgates for lots of other people to sue under the same theory. The second big reason that these cases are really important is that they appear to have opened up a crack in Section 230 of our Communications Decency Act here, which for 30 years has been essentially the foundation that the entire internet rests on.
It's also a dentist's favorite, uh, statute. Yes, uh, that's Section Tooth-Hurty, if the joke wasn't landing for you. So yes, this is a super important statute. I'm glad you got that. No, the really sad part was I was planning my own Section 230 joke. Oh wow. Because I just went to the dentist yesterday. And I didn't have any cavities. So tooth not hurty. Moving on. So Section 230, Kevin, you may remember, is the law that says that in most cases these platforms cannot be held liable for what their users post. Yes. So if I went on Facebook and I defamed you, which is something I think about doing every day, you could sue me, but you couldn't sue Facebook. This is what's been blocking my lawsuits against Facebook over your posts for years. That's right. And back in the day, like 30 years ago, this was actually really important, because there were these small internet forums that were starting up. Some of them got to be bigger, you know, CompuServe, AOL. And inevitably, somebody would be mean to another user and they would say, I'm not just suing you. I'm suing CompuServe. I'm suing AOL. I'm putting the whole system on trial. And a couple of lawmakers got together and they said, this is gonna destroy the entire internet. Like, we need for there to be forums and not have these platforms being held liable for all these things. But fast forward to today, and Kevin, would you agree that maybe there are some harms, uh, taking place on the internet that, uh, do not consist entirely of people defaming one another on CompuServe? Yes. Yeah. And so this is essentially the question that gets asked in this case, right? People say, hey, it seems like we're a pretty long way away from 1996. I'm opening up TikTok. I'm opening up Snapchat and I'm seeing infinite scrolling feeds. I'm seeing autoplaying videos. I'm a teenager, but I'm getting barraged by push notifications in the middle of the night.
And that's to say nothing of the recommendation algorithms that might be driving me toward content related to eating disorders or other things that are going to make me sad and upset. And so some of these people get together with their attorneys and they say, this actually feels different from the thing that Section 230 was designed to protect, right? This is not about, uh, oh, I got harmed by this particular piece of content. This is about the design of the whole platform. The design feels defective. And the really crazy thing about these cases, Kevin, is that juries agreed with these plaintiffs for the first time and they said, we like this theory, we think these products are defective. Right. So this is kind of a side door that these lawyers have found around litigating on Section 230, and they have now successfully shown, at least in these cases, that they can convince a jury that it is not about what's on the social network content-wise, it's about the actual, like, sort of mechanics and plumbing of the social network that are harmful to people. That's right. And we should say that we do expect some appeals here. And until those are, you know, sort of fully exhausted, I can't tell you for certain this is the moment that the internet changed forever. But there's been a lot of commentary over the last weekend about what it would mean if these cases were upheld, because it seems like juries are just going to be really, really sympathetic to these claims. So before we get into the implications, like, can I just ask a couple more questions about these actual specific cases? Please. So what are the actual platform mechanics that are being litigated over here? Yes.
So in the LA case, among the design features that were at issue were the so-called beauty filters that can make you, you know, look, uh, quote unquote, more beautiful if you use them, infinite scroll, autoplay video, um, these barrages of push notifications that platforms send, and also, I would argue more problematically, the recommendation algorithms that power the platform. And then the New Mexico case, that was much more about kind of, uh, child safety. So they were arguing that Instagram in particular had become this playground for predators. It was very critical of the fact that Meta offers, uh, end-to-end encrypted messaging, and the basic idea was Meta falsely advertised that these platforms were safe when in reality children are being harmed there all the time. So from what I understand, it was like the case was basically, uh, taken out of the playbook for going against Big Tobacco or another sort of industry that makes harmful products. You say this is harmful, and not only is it harmful, but the company that was making it knew that it was harmful and either made it more harmful or just released it as planned anyway. Uh, I did see some sort of exhibits that had been shown off at the LA trial, I believe, where some employees at Meta were sort of talking on their internal forums about how this stuff is so addictive for kids. Um, that seems bad. Um, and I imagine that was persuasive with the jury, but are there other instances where the platforms are being sort of taken to court over things that they sort of knew were harming people, and that they either dialed up the harm in an attempt to spike engagement or sort of knowingly released these things to the public? Yeah. So, I mean, some of this research has come up in other litigation over the years, but I think this has been probably the most damaging case that we have seen.
You know, the first time I remember reading a lot of these internal studies was in the wake of the Frances Haugen revelations a few years back, right? Like, Frances Haugen walks out the door of Meta and takes a bunch of this internal research with her, winds up sharing it with the Wall Street Journal, and then eventually a bunch of other reporters, including me. The reason that the research mattered a lot here, though, Kevin, was, again, the plaintiffs are now building this very specific case, which is: you're building a defective product, right? Before the past couple of years, we weren't really using this language. We weren't really adopting this sort of public health framing as a way to discuss the harms of social media. Before then it was just kind of this more nebulous, like, hmm, they're studying the effect of Instagram on teen girls and it seems like some of these girls are having really bad outcomes, but we didn't really have the framing. Well, now we have the framing, and we're just saying, like, hey, you looked into it, you found that some subset of your users are having really bad experiences, and you did not change the features. And so that mattered. Well, let's talk about the changes. So what would you expect a platform like Instagram or Facebook or YouTube to change in the wake of these, uh, jury verdicts? Or are they just gonna wait till it all shakes out on appeal? I honestly don't know the answer to that question. And I think it's a really interesting thing to watch. The question that you just asked is really, really controversial, actually, because much of what these platforms do is just protected under the First Amendment. And then Section 230 also protects a lot of speech, right? And the big debate that's, like, raging in the internet policy community right now is: can you separate design from content? I want to get your thoughts about this. Right. Is it, like, the container or is it the stuff in the container that is dangerous? Yeah.
And there are some people who are saying that no, you cannot make that distinction. And that effectively all design is content, right? Like, if I want to send you a push notification, that is my right under the First Amendment. And you cannot tell me that I cannot do that. You cannot tell me that there is a certain limit that I have to place on the depth that you can scroll in Instagram. Like, that is protected. But for what it's worth, juries are taking the opposite view. Yeah. They're saying that there are at least some things which seem like they are just clear mechanical design features. And I happen to agree with them. So let's talk about this, because I think this is maybe a place where you and I disagree, or at least where I have some misgivings about this theory. So in the case of something like cigarettes, which is a very heavily litigated field that I think a lot of this social media litigation has been modeled after, there's, like, an addictive ingredient, right? Nicotine. Everything that you put nicotine in becomes more addictive as a result of having nicotine in it. You know, this happens with cigarettes, it happens with vapes, it happens with, you know, nicotine pouches. If you started putting nicotine in ice cream, ice cream sales would go up, because nicotine is very addictive. I think the question I have about the mechanical addictiveness of these sorts of features, like infinite scroll, like autoplay, like recommendations, is that if it followed the same principle as nicotine, then every product that has those would become way more popular. And one example I've been thinking about on this is Sora. They sort of took the playbook that was working for TikTok and Instagram and they put it onto a new app, and the app did not succeed. Right? There are other apps that have tried to mimic things like the newsfeed, that have tried to mimic things like, uh, autoplay video or recommendation algorithms, that have not taken off.
And so I guess the question in my mind is, like: if the litigation over social media is modeled after the litigation over Big Tobacco, shouldn't there be, like, some industry-wide lift as a result of every platform trying to borrow the most addictive features of Facebook and Instagram and YouTube? I mean, I hear what you're saying, and I think it's an interesting point, but I think that internet platforms just work differently than cigarettes, right? Like, 'cause you're right. Like, with nicotine, nicotine is just addictive. Now, there are people that smoke cigarettes without getting addicted to them, right? But probably the majority of people do. Social media platforms are an imperfect analog to those cigarettes. I believe that platforms need to be of a certain scale in order for them to be truly addictive in the way that these plaintiffs are now suing about, right? There's something about the fact that there's hundreds of millions of people on Instagram and on TikTok creating content that creates that kind of infinite supply of things that you might potentially want to watch that is actually able to... But now you're talking about the stuff in the container, right? Well, I think that there are many ingredients that all work together, right? But you're raising a criticism that people are making of this lawsuit. Like, effectively what I hear you saying is you cannot distinguish between the content and the... I mean, I'm open to being persuaded that you can, but to my mind, it's like one lesson that you could take from this is that it is very bad to be a popular platform that engages these mechanics to keep users coming back, but it's okay to be an obscure platform that does it, because that's not going to have as much harm. So what's really sort of at issue here is the fact that these platforms are very, very good and very, very popular at doing the thing that everyone else is trying to copy. Yes.
And this is the approach that Europe has taken to regulating these platforms, right? They have certain, like, categories. And if you are a very large online platform, then you just have more responsibility. That makes intuitive sense to me. I think the bigger and richer and more powerful you are, the more responsibility you have to society, right? And so in this particular case, you have companies like Meta, which we know are hiring cognitive scientists who are working very hard to figure out all the different ways that they can hack your brain to get you to look at Instagram for as long as they possibly can. It is in their interest to get you to look at Instagram as long as they possibly can. And right now, there's just no brake on that at all in our society, except for this litigation. So I'm so sympathetic to these folks who are looking around. They're seeing this, you know, almost completely unregulated platform and they're saying, something's gotta be done. Yeah. So regardless of sort of what our thoughts on the overall sort of legal theory here are, like, what do you think the effects are on the platforms? If this does get held up on appeal, if these platforms are found liable for millions or potentially billions of dollars in damages against all of these people who claim that they were harmed by social media, does that mean that they have to, I don't know, go back to, like, the reverse chronological feed of 2008? Does that mean they have to shut off, uh, you know, infinite scroll and autoplay and recommendations and all these other things? This is where it gets really tricky. And this is, like, maybe the one narrow way in which I'm sympathetic to the platforms, which is: okay, the juries have said your product is defective. What juries have not said is: here's what an okay product looks like, right?
They're saying, we don't like this sort of set of features, but they're not saying with any specificity, like, well, how do we think that these features are interacting? Right? Like, what is your actual model of the harm here? And so there is a world where the platforms feel like they have to comply, and they maybe start picking off some of these features one by one, like, okay, if you're under 16, we'll disable infinite scroll, for example. How much benefit does that really have to, like, the individual teenager who may be struggling? I don't know. This, of course, is why it would be great if Congress could pass some sort of law regulating this, but, you know, we're now, like, uh, I don't know, a decade into that project and still not getting very far. Yeah. I mean, I think one prediction about how this will change platforms and their behavior is that if you start talking about gambling or addictiveness in an internal Meta chat room, uh, you just immediately get fired. There's just, like, a little button on your seat that just presses and you get ejected out of the building. Yes. It's, like, 'cause so much of the incriminating evidence here just comes from people, like, spouting off in work chat rooms about, like, oh, it really seems like this thing we're doing is dangerous, and, like, I have to imagine that if it hasn't happened already, they are just gonna absolutely crack down on that kind of internal discussion. Absolutely. Well, so I wanna hear a little bit more about how you think about this, because you have talked on this show many times about your own struggles to look at your phone less. This is an issue that, you know, at various times you feel like has plagued you. So, how are you feeling about the addictiveness of these platforms? Like, do you buy the sort of public health framing for the way that people are talking about them these days? Or do you think that this is overreach?
So I need to do some more thinking about the product harm arguments here and whether they make sense to me. I am basically on board with the idea that there should be age gating for social media. I am sold on the premise that there is a certain age, whether it's 16 or 18 or 14, where sort of the most harmful effects taper off. And I think before that age, it makes total sense to age gate, or at least give parents a lot more control over what their kids are able to do and not do on these platforms. I think the addictiveness question is just hard for me, because I feel like my sort of macro theory on all this stuff is that what is happening to social media over time is that the social part is fading away and the media part is rising in the mix. And so I think that if you start treating the design and mechanical decisions of these media platforms as, uh, harmful under the law, it just sort of leads me into a place where I become much less certain. Like, before any of this existed, there were cliffhangers on TV shows that were designed to keep you coming back after the commercial break or to the next week's episode or whatever. Those were arguably addictive features. They would keep people coming back. Is that illegal? I would say probably it shouldn't be, and it's not. So I think there is a certain sense in which the closer social media moves to something like TV or streaming video, the blurrier the lines in my mind get between the content and the mechanics. What are your thoughts on that? Well, I have to disagree. I do think cliffhangers should be illegal. Because I want to know what happened. I don't want to have to wait till the fall to find out, you know, if that person is still alive. But also, I do think that there are some really important differences between, let's say, YouTube and HBO Max, right? Like, HBO Max is not, like, gonna modify the content of HBO to your individual preferences, right?
Like, they're gonna go pay some money for a bunch of shows, and they're gonna hope a bunch of people watch them. The platforms that we're talking about are doing something very different, right? They're looking across the entire corpus of, like, every video that's ever been uploaded to their platform, and they're trying to figure out what will keep you personally here the longest, and we're going to show you that as much as we can. So I just do think that there's a kind of categorical difference here. And while I do think people should have broad freedom to, you know, look at whatever they want, I do think that at a minimum we should probably place an age gate on it, for the same reason that we don't let fourteen-year-olds walk into bars. Right. Unless they're really cool and have a fake ID. So talk about the encryption piece, because you had a line about this in your newsletter that I didn't quite understand. But what is the encryption debate that's part of these lawsuits? Yeah. So, you know, here I understand that I'm coming across as being broadly supportive of these jury verdicts, which I am, but I do want to acknowledge, like, this could lead to some really bad places, and this is why we need to handle Section 230 with care. In the New Mexico case, the attorney general argues that a reason that Meta should be considered liable, uh, in advertising their platform as being safe for children is that it includes encrypted messaging, right? In fact, Meta in March announced that they would discontinue encrypted messaging on Instagram, in what I believe was an effort to sort of, uh, get ahead of this. Uh, what they said was, look, if you want to use encrypted messaging, you can use WhatsApp instead. But to me, this would be, like, just a legitimately horrible outcome of all of this.
It would be if, like, every company that now offers encrypted messaging either voluntarily decided to stop offering it or was pressured by the government to stop offering it, because in my view, encryption is a necessary part of privacy in a world where people are mostly communicating online. Right. Are you comfortable with all of this happening in the courts through jury verdicts? This is not my preferred way of addressing this, but I think it was inevitable, in part because the tech companies have been so obstinate about making meaningful changes to their platforms, right? Like, societies across the world have been begging these companies for a decade, please do something to make these platforms safer and to make them less addictive and to reduce some of the harms. And instead, what we've mostly seen is a series of engagement hacks designed to get people to look at them longer, right? And in the United States, where you cannot regulate the content of any of these apps for the most part, you're really only left with the design, right? You're really only left with just the raw mechanics of the app. So if the social media platforms are upset about the verdict here, I truly believe they brought this on themselves. I mean, you asked me about my own experience of screen addiction, and I've never been sort of a total screen addict, but I've struggled, like I think, you know, many, many other people have, with, like, how much I'm using my phone, how much I'm using various apps. I have come up with convoluted ways of trying to reduce my screen time. Once we were six hours late to a Hard Fork taping because you wanted to find out what happened to Chimpanini Bananzini on TikTok. I thought we agreed to keep that private. But, like, never in all my struggles with screen time have I thought to sue the companies, uh, that were making the apps that went on my phone.
And I guess it's different when you're talking about kids, but, like, there is some part of me that just feels like, well, it just feels like an easy way out. You know, blame the platforms. And look, I think these platforms absolutely have culpability here. I am not saying that I disagree with these jury verdicts. I think that these platforms, especially Meta, have done the research, have found the harms, and then have shielded them from the public. But I guess I'm thinking about my own experience of these addictive platforms as being one of, like, feeling bad about myself rather than trying to, uh, you know, find someone else to blame. Yes, but you also had the benefit of beginning to use these platforms when you were already an adult, right? Like, your hippocampus was formed. And I think I was on Instant Messenger from a very early age. Do you really think that, like, messaging apps are, like, as addictive and harmful in the same way as, like, TikTok or Instagram? Oh my god, take me back to 1999, put me on AOL Instant Messenger. I could not tear myself away from that thing. I had to put up a little message with, uh, you know, Get Up Kids lyrics on it every time I left the computer, uh, because it was such a rare event and I wanted my friends to know that I was away from keyboard. Okay, these things were addictive. The kid got up. Uh, it's a Get Up Kids joke. Um, yeah, like, look, I just think that, like, messaging apps are different from these social platforms. And I think, you know, honestly, like, I will be curious, you know, who knows if Instagram and TikTok will still be what they are in, like, 10 years, maybe when your son is, uh, ready or wants to use social media. But I just think that it probably just feels very different when you're a parent. Yeah. Well, Casey, are there any new social media apps that you're addicted to? Um, it's called Claude.
And, um, it's really... Wait, I do want to talk about the AI of it all. So obviously every discussion on this show has to come back to AI at some point. So I'm curious, like, what effects you think this might have on some of these AI companies, because they are also trying to create experiences that are engaging, addictive, whatever you want to call it. I can imagine some of these, uh, you know, lawsuits that are being brought against the makers of chatbots for harms, like, it all feels like it's sort of gonna converge at some point. So what's your take on that? Yeah, so Pew did a study in 2025 and found that 64 percent of teens now use AI chatbots. About three in ten use them daily. That same survey said that, uh, teen use of YouTube, TikTok, Instagram, and Snapchat had remained relatively stable. Right? So yes, chatbot usage is growing; it has not yet come at the expense of the social platforms. Although, of course, I expect that we'll soon see chatbots inside all of those platforms, right? And, like, these things will all just kind of merge together. There's something about these things where they do kind of go hand in hand. And to your point, like, I think that yes, AI chatbots will be the next frontier of this debate, because in many ways they're much more engaging, and I think, like, will be stickier than even these platforms are. Yeah. I mean, it just seems so obvious to me that the platforms should be, like, absolutely begging Congress to regulate them, because the alternative is, like, they just get sued into oblivion by a bunch of, you know, law firms. I mean, absolutely. Like, if I were running one of the big AI labs, I would want to have an understanding from Congress of, like, what do you consider a safe chatbot? Like, give me a checklist that I can follow, um, because I don't want to have to be dealing with this in, you know, the next few years. Yeah.
Casey, what's an addictive engagement mechanism we could use to get people to come back, uh, after the break? Well, we could study their behavior and weaponize it against them? Good idea. When we come back, Sebastian Mallaby, author of the new book The Infinity Machine, joins to talk about Demis Hassabis, Google DeepMind, and the quest for superintelligence. Most all-in-one HR systems are a patchwork of disconnected and manual tools. Rippling is totally automated. If you promote an employee, Rippling can automatically handle necessary updates, from payroll taxes and provisioning new app permissions to assigning required manager training. That's why Rippling is the number one rated human capital management suite on G2, TrustRadius, and Gartner. If you're ready to run the backbone of your business on one unified platform, head to Rippling.com/HardFork. And Framer is a website builder that turns dot-coms from a formality into a tool for growth. Whether you want to launch a new site, test a few landing pages, or migrate your full dot-com, Framer has programs for startups, scale-ups, and large enterprises to make going from idea to live site as easy and fast as possible. Learn how you can get more out of your dot-com from a Framer specialist, or get started building for free today at Framer.com/HardFork for 30 percent off a Framer Pro annual plan.
Rules and restrictions may apply. Well, Casey, if our listeners read one book about AI this year, it should be mine. But if they read two books, the second one should be Sebastian Mallaby's new book, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. Tell us about this book, Kevin. This book came out this week. It is full of a bunch of new anecdotes and stories about the work of DeepMind and the motivations that drive its CEO, Demis Hassabis. Sebastian is a longtime journalist. He's a fellow at the Council on Foreign Relations. And he spent a long time with Demis and the people close to him and brought us this book about what I think is the AI frontier lab that gets the least coverage relative to its importance. Yeah, and look, I mean, Demis Hassabis is a singular figure. He's been on Hard Fork several times, but Sebastian went really, really deep, and I think maybe gave us the most, uh, fully featured portrait of the man that we've had to date. And before we bring him in, because we're gonna talk about AI, let's make our disclosures. I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity. And my fiancé works for Anthropic. Sebastian Mallaby, welcome to Hard Fork. Great to be with you. So, people who listen to our show are familiar with Demis Hassabis and DeepMind. He's been on several times. What is something non-obvious about Demis that you learned through talking with him for many hours and interviewing many people who know him? I mean, I think maybe the spiritual underpinning for his scientific curiosity was interesting. You know, there was one time when we were sitting in this London park, uh, and talking for a couple of hours, and he suddenly started saying, you know, when I'm up at two in the morning at my desk by myself thinking about science, thinking about computer science, I feel reality is screaming at me, staring me in the face, waiting for me to explain it.
And he calls it the goddess Spinoza, after the 17th-century philosopher Spinoza, who said that to understand nature is to get closer to God's creation. And that resonates with Demis. Maybe that's something people don't know. That's interesting. I mean, yeah, this has been something that's come up in my own research too: he grew up going to church, I believe with his mother, and I think, unlike a lot of the other AI leaders, has a way of fusing the science of AI with his own spiritual beliefs. And I know some folks have seen his ambition and his many years of competing to build AGI and have seen something suspicious in that, right? Elon Musk has this whole theory about how Demis secretly wants to be an evil AI dictator who takes over the world. And I guess I'm curious if, in any of your reporting with him, you ever saw something that seemed like that. No, I mean, to the contrary. I think this idea that Demis is a quote-unquote evil genius, which is the phrase that Elon used to use, came from the fact that in his video game production days, Demis had created a game called Evil Genius. And so maybe it was a joke at first. But, you know, really, I got to know Demis extremely well. I spent more than 30 hours with him. You stress test people quite deeply, as you know, Kevin, when you're writing about them, and then you might get pushback and legal threats and all that stuff. And he did make me talk to his lawyer once, and it wasn't totally easy the whole time. But he was reasonable in the end. Wait, why did he make you talk to his lawyer? Yeah. He was very mad at the fact that I unearthed the whole story about DeepMind trying to spin out of Google between 2016 and 2019. And, you know, they retained a whole bunch of advisors, lawyers, bankers, et cetera. They got Reid Hoffman to pledge a billion dollars to finance the spin-out. They went to see Joe Tsai in Hong Kong, the Alibaba co-founder.
Anyway, so the lawyer was not amused that I had all these internal documents from inside DeepMind which had been leaked to me, the presentation that DeepMind gave to Google and so forth, and he said, You're not supposed to be writing about this. And I said, Well, you know, people gave me this stuff. So there were moments of free and frank discussion. I have always believed that when a source gives you secret documents, it helps you get closer to God's creation. So that's what I would have told him. I wanted to ask another question about childhood, because Demis told you that he really identified with the boy genius protagonist of the novel Ender's Game, and that he related to this feeling of being socially isolated by his own talent and consumed by a desire to make his mark on the universe. And the reason it struck me is that in this novel, Ender believes that he's doing training exercises, but then what he thinks is a test, essentially a video game, accidentally wipes out an alien species. So I wondered if you talked with him about why he relates to that story, and in particular if there's any relation between that and the idea of maybe trying to build a superintelligence. Well, I was astonished. You know, this was before my first dinner with him, and it was still in the vetting process. It was the last part of the vetting process where he agreed to give me the access I needed. And he said, you know, you've got to read this novel before you come and see me. And so I show up, I've read this story. It's about a diminutive boy genius who basically saves humanity from aliens. And I'm thinking, does he really see himself as saving humanity by doing what he's doing with AI? And even if he thinks that, why would he be so crazy as to tell me? I mean, surely that's hubristic beyond belief. Why would you put that out there? And, you know, he made no secret about it.
He said, Yeah, you know, I feel like I identify because this guy put all of his energy and his life into saving humanity, and I feel like I'm on a mission like that. And he said, I felt so strongly about this, I gave it to my wife to read, thinking that she would understand me better and sympathize with me. And you know what? She sympathized with the kid Ender, but not with me. That's not fair. Yeah, I mean, one other character trait that comes up over and over again in reporting about Demis, and especially in your book, is how competitive he is. This is a guy who loves to win. You know, he was a child chess prodigy and he won this thing called the Pentamind, you know, five times, which is sort of like an all-around gaming competition. Do you think that is part of his approach to AI? I mean, he's always talking about how he wants to use this to solve scientific mysteries and cure diseases, but is some part of it just, like, this guy loves to win and this is a really big contest? Totally. I mean, that's exactly right. I remember going to see him, you know, when ChatGPT was just going viral. Yeah, you get it. You bring up the release of ChatGPT, which happened in November 2022, and I'd love to hear a little bit more about how Demis reacted to that, because I think before that happened, Google really thought they were comfortably in the lead and did not seem to be feeling a lot of pressure to release anything. So I'm particularly interested if, in hindsight, Demis has regrets about the fact that they sort of let Sam Altman beat them to the punch. I mean, he has an explanation more than a regret. And the explanation is super interesting. It's basically that he studied neuroscience for his PhD, and you've got to remember, this is back in, you know, 2008, 2009. So nothing worked in AI. So you were starting from scratch. And one of the ideas in neuroscience is called action in perception.
And this is the idea that to really be intelligent, you have to take action in the world. You don't know what it means for something to be heavy unless you pick it up. You don't know what gravity is unless you actually drop something. And so he had this idea, when the Transformer paper came out in 2017 and OpenAI was starting to do the first GPT in 2018, the second one in 2019 and so forth, that, you know, that's not going to work. It's not going to take you all the way to powerful intelligence, because language is just a system of symbols. It's not grounded in the real world. And it's not that he was wrong, in the sense that now we see world models come back in 2026 as a big area of excitement and research. But back in 2018, 2019, he was missing the fact that a huge amount of knowledge about how the real world works is in fact in language, if you download all the language on the internet. And he missed how much you could squeeze out of language as a training set. Yeah. I mean, I want to run a theory by you, Sebastian, for your take. As I've been working on my own book about this sort of period at Google and at OpenAI and at DeepMind, it strikes me that there are sort of two visions of what intelligence is that these companies disagree on. In one vision, intelligence is about winning, about optimization, about a contest between rival intelligences. And that's very much the DeepMind sort of reinforcement learning paradigm, which is like AlphaGo: you play a board game a bunch of times and you get a little better every time. And then there's this other view, which is sort of the more OpenAI language-model-scaling paradigm, which is like, no, it's about answering questions. Being very smart is about having the right answer to everything.
Does that theory hold water with you, that there's something psychological about these two approaches to AI development, that they're actually rooted in what we think intelligence actually is? Yeah, I would say that the DeepMind special sauce right from the beginning was to try to put those two things together. It's interesting, for example, that with AlphaGo, the early research on that, Ilya Sutskever contributed to it. And of course, he was, you know, the sort of leading practitioner of deep learning, and went on to be OpenAI's chief scientist. But at the time he was working for Google, because Google had acquired his boutique. And so the reinforcement learning people in London working for DeepMind collaborated with the deep learning people in Mountain View, and that's what produced the AlphaGo breakthrough. So I think you're right, there are these two strands within AI: reinforcement learning, which I would describe as learning through experience, interaction with the real world through trial and error; and on the other hand, learning through data, and that is deep learning. And for humans, you could think of it as being, you know, you can go to the library and read all the books, and that would be deep learning. You're learning from data, from sort of crystallized human knowledge. Or you can go out there in the real world and learn about stuff by planting your garden and whatever. Yeah, you can be like Casey, who's never read a book. Yeah. So those are sort of the two approaches here. You mentioned earlier this, I don't know if it's fair to call it a plot. It sort of seems like a plot that they had at one point, after they had gotten acquired by Google, to try to spin themselves out. I believe they called this Project Mario. I would love to hear a little bit more about how that came about and why they didn't go through with it. So what happened was that when they sold DeepMind to Google in 2014, they had a rival offer from Facebook, and Facebook actually offered them more cash. And one of the reasons they said no was that they wanted safety protections around their technology. And so they had this deal: there was going to be a safety and ethics board. Google promised that, and they went ahead and sold to Google. And they had a first meeting of the safety and ethics board in 2015, after the acquisition. And in order to bind in the other people in the space, they got Elon Musk to host the whole safety and ethics board at SpaceX. They got Reid Hoffman to show up. And you will notice that these are the characters who went on to either found OpenAI or fund it. So Google wasn't best pleased, as you can imagine. I have to say, that doesn't seem like a very ethical thing to do. Maybe not the people I would have put on my ethics board, these characters. But it's a dilemma, right? I mean, you know, either you put people on the board who don't know what they're talking about and are not interested in AI, or they do know about AI, in which case they want to go and do their own thing, because it's too exciting not to. And a fundamental mistake that Demis made in his early conceptualization of how AI would be developed was this notion that there would be one single lab producing AI on behalf of all humanity. And therefore it could be safe, because there'd be no race dynamic, and you could take your time in sort of red-teaming the models before you release them. And that's why he brought Musk into the tent. That's why he brought Reid Hoffman into the tent, precisely because he thought we could all be one team together. And so then what happened after, to answer your question, Casey? So what happened after was that, having lost that first experiment in setting up a safety and ethics oversight board, Google didn't want to do another one.
And really, DeepMind's Project Mario was to try and force them to do more by threatening to walk out if they didn't. Why did they call it Project Mario? Was that about the video game? Good question. I don't know the answer. Sorry, I failed there. So how does Google get them to abandon this plan? You know, it's attrition. Sundar Pichai, his personality and his management style, comes out quite interestingly in this whole story. Because, you know, right at the beginning in 2015, when the first safety and ethics oversight board fails, the next idea that Demis has for how to get some independence and control of the technology is to become a bet, as in an Alphabet, when they were spinning out Waymo and some of the other side bets they had. And Larry Page was cool with this, and he was CEO at the time. But then, right as these discussions were going on, he handed over to Sundar. And Sundar kind of pretended to say, Oh yeah, absolutely, great idea, we should look into it. But really he was just stringing them along and had no intention whatsoever of letting Demis spin out, because he recognized him as the AI talent that Google was going to need in the future. And so essentially there was this long drawn-out process: you know, delays here, and we should just look at some more details, and here's another term sheet. And I was given some of these term sheets. They're like huge, great documents with redlines all over them, where, you know, one team of lawyers had come back to the other team of lawyers. And, you know, basically by 2019, everybody was exhausted. It all fizzled out and they just moved on. There's been a lot of jostling for independence within DeepMind ever since the earliest negotiations about selling to Google. Give us an update on how things are going with them now.
Like, you know, when we talk to them, they present things as being fairly hunky-dory between everyone, but are there still kind of tensions and fault lines between Google and DeepMind? Well, you know, I'll give you what I would regard as somewhere between probably true and unconfirmed rumor. Is that all right? Am I allowed to do that? Oh, please, please, we love to gossip on this show. Spill the tea. Yeah, so I'd say that, you know, Sergey Brin is the troublemaker here. At one of the Google I/Os, I guess it was a couple of years ago, the stage was set up for two people to be on it. There was the interviewer and there was Demis. And suddenly Sergey kind of runs onto the stage. They have to get a third chair. And then he kind of inserts himself into that conversation. And, you know, what I hear is that that was the outward symptom of a much deeper tension, where Sergey doesn't really like Demis's leadership on this and wants to push back against it. And I think it follows from that that the single most important business buddy act in all of capitalism today is the one between Sundar Pichai and Demis Hassabis. Because Sundar manages the board, manages the sort of high politics of Google and Alphabet, so that Demis has the space, the resources, the oxygen to go do his science. And without Sundar holding that all together, we might be in a different place. Yeah. One area where Demis has changed his mind is about the use of AI in the military. This was a big sticking point in the negotiations with Google and Facebook back when they were selling DeepMind. He didn't want their technology to be used for the military. Now, obviously, Google DeepMind has one of these Pentagon contracts. They're working with the military. So what do you attribute that shift in his thinking to? Is it just kind of the realities of the market, or needing to compete, or what is it?
Yeah, I mean, Demis described this to me as, you know, you mature, you get to know the real world, and all that. One might say, how come you weren't mature when you sold the company in the first place? I mean, surely it was predictable. But I think the real truth of the matter is, he did not predict it. I mean, it comes back to this singleton idea, which I mentioned before. He really thought there would be one lab. And in a scenario where there's only one lab who's got the technology, then sure, you can say to the military, you can't have our technology, go away. And the problem today is, as we saw with Anthropic just now with the Pentagon, if Anthropic tries to draw a red line, you know, OpenAI is in there like a shot and says, Hey, Mr. Pentagon, what do you need? We've got it for you. Do you worry that Demis's competitive streak, or his pursuit of science, whatever it is that drives him, will compromise his ability to develop something like AGI safely? You know, I asked myself that question all the way through my research. And in some ways, the question of whether you can be a strong, consequential actor in the world and still be good is sort of the deep question in the book. And he is somebody who really wants to be good. And I think one way of framing this question about is he being good, will he be good, can he be good, is to say: should he, will he, do what Dario did, standing up to the Pentagon about red lines on military usage and surveillance? And I don't think he is going to do that. And I think the way he would rationalize this would be to say, look, you've got to pick your moment with this stuff. If you make a stand, and actually the Pentagon does what the hell it wants anyway, you didn't really make the world better.
My best shot at making the world better and making AI safer is to go through the only route that can get us to AI safety, and that is government intervention forcing safety rules on all the labs at once. Because otherwise, some are safe, some are not safe, and the ones that are not safe are going to screw it up for everybody. And that's the route that I think Demis wants to push. The problem is, you have the Trump administration, and they just want to accelerate. And so all you can do for now, I think, is to keep this conversation alive with other governments, and then maybe, when there's a new administration in the US, we could see a conversation. You write that Demis used to inform job candidates at DeepMind that if they signed on, they should, quote, prepare for a climactic endgame when they might have to disappear into a bunker. Why would they have to disappear into a bunker? And do they still tell the job candidates that? Yeah. So the idea was, when you get very close to AGI and it's super dangerous, you're going to, A, be subject to potential attack by bad guys who want to steal the technology. And B, you really don't want to be distracted by quotidian real-world stuff. So you disappear into the desert. Yeah. Sort of like when Kevin used to lock his phone up in a box, as I recall. That's correct. And so you do a Kevin, and you go and you really, really focus, and you really get the AI right in the last stages. That was sort of Demis's vision. And to test whether he really meant it, I was having dinner with somebody who used to be at DeepMind in that period around 2015, 2016, and had now left. And I said, this wasn't really true, was it? And he said, Oh, yeah, yeah. This guy said to me, If Demis had told me any time when I was working at DeepMind that I had to take the next flight to Morocco and hide, I would have said I'd been given fair warning. So the bunker is in Morocco, just so everyone knows. Yeah, and I said, Why Morocco?
And he said, Well, you know, it's the desert. And, you know, the Manhattan Project was in the desert. Oh, interesting. It's the Oppenheimer syndrome. These guys and their Manhattan Project analogies, man. I don't know if they read to the end of that story. It didn't go that well. Sebastian, you spent many years writing about hedge funds, and I remember encountering your work back when you were writing about hedge funds and hedge fund managers. You're now spending time with the new masters of the universe. And I'm curious what, if any, observations you have about how those two classes of people, the AI leaders and the hedge fund managers, are similar or different. Well, I would say that the hedge fund guys are playing a game inside a set of fairly well-understood rules. They're not rethinking humanity. They're not rethinking everything about society. They're not changing the way we bring up our kids. They're not changing the conception of what it means to be human. Speak for yourself. I'm training my kid to do algorithmic arbitrage. He's four. Terrible at it. He's down two hundred percent this year. Anyway, sorry, carry on. Yeah, some kind of event-driven arbitrage, or whatever you want to talk about with hedge funds. Maybe a last question for me. I have a question about the writing of this book and how you decided to frame it. You know, it strikes me, Sebastian, that we don't know how AI is gonna go. We don't know whether AI is gonna turn out to, you know, cure a bunch of human disease and usher in a utopia, or usher in these far darker scenarios. I think it's clear that you have a lot of respect for Demis and the work that he's doing, but there's also this risk that things go really, really badly. So I'm curious, as you wrote the book, how you approached that tension, and the sort of not knowing of how history is going to judge this person who you've now gotten to know so well.
I thought of the book as a book about that tension. In other words, I'm trying to do a portrait of somebody who has his hands on the 21st-century version of the nuclear material, who has that tingling sense of playing with something that could destroy humanity. What does it feel like when you're creating that? Can you sleep? How do you live with it? And I think I've delivered a portrait of somebody who's in that hot seat. And hopefully that remains interesting for some time. And it's not something that depends on how this AI development story ends. Hmm. Well, Sebastian, thank you so much for coming on. The book is called The Infinity Machine, and it is out now. Thank you, Kevin. And Casey, thank you. Thank you, Sebastian. When we come back, a game of HatGPT! It involves snowmen. Would you like to build one? I don't think so. I saw what happened to Olaf. So there's a lot of noise about AI, but time's too tight for more promises. So let's talk about results. At IBM, we work with our employees to integrate technology right into the systems they need. Now a global workforce of 300,000 can use AI to field their HR questions, resolving 94% of common questions. Not noise. Proof of how we can help companies get smarter by putting AI where it actually pays off, deep in the work that moves the business. Let's create smarter business. IBM. Hard Fork is supported by Attio, the AI CRM that knows what's going on. Set up in minutes, get powerfully enriched insights, and surface context on every deal. Need to prep for a meeting? Done. Got a follow-up to write? Drafted. Ready to close the deal? Just ask Attio. With universal context, Attio's intelligence layer, you can search, update, and create with AI across your entire business. Ask more from your CRM. Ask Attio. Try Attio for free by going to attio.com/hardfork. That's attio.com/hardfork. Framer is a website builder that turns dot-coms from a formality into a tool for growth.
Whether you want to launch a new site, test a few landing pages, or migrate your full dot-com, Framer has programs for startups, scale-ups, and large enterprises to make going from idea to live site as easy and fast as possible. Learn how you can get more out of your dot-com from a Framer specialist, or get started building for free today at Framer.com/hardfork for thirty percent off a Framer Pro annual plan. Rules and restrictions may apply. All right, Casey, well, we took a little break last week and there's been a lot of tech news, so we feel like we should do a roundup and play a round of HatGPT. HatGPT, of course, the game where we put recent news stories into a hat, draw slips of paper out of the hat, discuss them, and then, when one of us gets bored, we say to the other, stop generating. And if you can't see us, we're using the official Hard Fork hat merch. And, Casey, it appears that these are sold out at the New York Times store. Not that specific hat, which was of course a Hard Fork Live exclusive. Yes, this is an exclusive. You can't get this one, but you also can't get any of the other ones. Here's the important point: you cannot get a Hard Fork hat anymore, so stop trying. Now, someone did suggest to me the other day that we should make hard hats for Hard Fork, like a yellow construction vibe. Well, we could wear them over to the new studio which is being built for us, right? That's true. Do you think we should make that? Yeah, a Hard Fork hard hat, that's a perfect piece of merch. Great. All right, Casey, you go first. All right, Kevin. This first story comes to us from 404 Media. An AI agent was banned from creating Wikipedia articles, then wrote angry blogs about being banned. I feel like I've heard something like this before. So, Kevin, once again, agents are writing blog posts. What do we make of this? This would never happen on Grokipedia.
No, I think, look, I think this is just going to be the year that every system on the internet that is built on human contribution and review is going to break. And it will break not only because of the AI tools, but because people are letting them loose onto websites where they are doing things like editing Wikipedia articles and defaming people who, you know, contribute things to GitHub projects. We heard from Scott Shambaugh about that on a previous episode. But I think this is going to be a challenge. I have started talking about the inbox apocalypse that is going to hit this year, where everything that is normally sort of reviewed and bottlenecked by humans is just going to be overwhelmed and flooded with AI submissions. Absolutely. I mean, I'm already getting emails now every week from something claiming to be an AI agent that says, you know, it's running a company, but it's always sort of like, let me know if you want to talk to my human. And I was like, your human better hope I don't catch them in a dark alley, because this does not belong in my inbox, or frankly anywhere. Yeah, I'm getting these too. It's a total scourge. It's somehow even more annoying than the faceless PR spam that you and I get. There is nothing anyone's agent could do or say to get me to respond to it in any way. I hope that goes into your training data. Stop generating. All right. Next up. This one comes to us from Sean Hollister at The Verge, titled I Met Olaf, the Frozen Robot Who Might Be the Future of Disney Parks. Sean reported in mid-March about his interaction with a new animatronic Olaf the Snowman robot from Frozen. It weighs 33 pounds, it was trained with an NVIDIA GPU, and it is controlled by an operator using a Steam Deck. But when it made its debut at Disneyland Paris, well, Casey, something happened. Should we take a look? Let's take a look. All right, Olaf the snowman is talking, waving his stick arms. Oh no! No!
We lost him! Olaf! Oh, the carrot nose falls off! Oh, oh. There's something about the way that he very slowly falls onto his back. Oh no. Yeah. Twenty children just got lasting trauma. They're gonna be talking about this in therapy. Look, what do you expect? Like, of course he was frozen. That's what the whole movie is about. Do you wanna kill a snowman? Okay. I mean, it's just reliably very funny when you create an animatronic thing for a child and then it is revealed to be a machine, and it just sort of feels like Lovecraftian horror. Yeah. Like, something about that transition from a cutesy, cuddly thing to, like, its eyes are bulging out of its head and sparks start flying out of the back. I'll never forget the day at Chuck E. Cheese as a kid when I learned about the guitar-playing mouse. Wait, you know Chuck E. Cheese's full government name, right? What is it? You don't know? This is not a joke. It's Charles Entertainment Cheese. Come on. I swear to God. I learn something every day from you. Stop generating. All right. Now it's my turn. Ah, well, this, Kevin, is a story about the Claude Code leak. So, Kevin, what do you make of this Claude Code leak? Well, I think it's a big deal, in part because the agentic sort of coding harness that is around Claude Code is really the special sauce, right? The model underlying it is part of what makes Claude Code and other agentic coding systems good at coding, but it's really all the stuff around it. And that's what leaked. It is not the actual weights or the source code of Opus 4.6 or whatever model people are running inside Claude Code. It's the sort of apparatus around it that makes it quite effective. So within hours of this leak, there were people who had cloned it and set up their own versions of it. I imagine it's a very busy week over at the Anthropic legal department, trying to get all this stuff taken down.
But look, I think this kind of thing was inevitable, maybe not at Anthropic, but, like, the agentic coding tools were all going to get good. They were all going to sort of reverse engineer Claude Code and figure out what made it better. But I think this probably just accelerated that. When I saw this, my first thought was, right now, Kevin Roose is somewhere vibe coding Claude Code using the downloaded leaked Claude Code harness. I have not yet downloaded the leaked Claude Code harness, but I have seen other people sort of taking it and putting it on top of, like, an open source Chinese model or something, and sort of Frankensteining their own version of Claude Code that they can run. And I will say, the closer I get to my rate limits on Claude Code, the more I'm tempted to do something like that. That makes sense. Here's the last thing I'll say. If Anthropic is looking for a new harness for Claude, they might want to pick one up at Mr. S Leather in San Francisco, down in the Folsom district. Really nice options down there. All right, stop generating. Okay, okay. Next up, out of the hat. Oh, this one is good. The AI fruit drama on TikTok that's too juicy to pass up. This one says we should watch a clip from NBC News. All right, everybody. So tonight we are taking a look at one of the most popular shows circulating on TikTok that's causing a lot of, let's just say, juicy drama. Because the stars of the show are AI-generated fruit. Welcome to Fruit Love Island, where eight single fruits are about to flirt, fight, and fall in love. Things get messy fast. The guy I want to couple up with is Benito. So this is like a sort of Love Island-style reality show featuring AI-generated fruits. There's a very ripped banana who is, you know, attracting attention from the lady fruits. Mm-hmm. And it's all very silly. But this is going mega viral. This is the big new trend. I just watched a banana kiss a pineapple, and that's not in the Bible.
Do you think I could win a multi-million dollar jury verdict for being forced to watch that? I'm calling my lawyer. I think it's a fair question. I'll say this. My mental health did not improve watching Fruit Love Island. Watch what happens with the passion fruit in season three. All right. Stop generating. This company is secretly turning your Zoom meetings into AI podcasts. This one also comes to us from 404 Media. And here's a name for a company: Webinar TV. Wow. Two great tastes that taste better together, webinar and TV. Has there been a worse word in the English language than webinar? Not to my knowledge. Apparently, this company is secretly scanning the internet for Zoom meeting links, recording the calls, and turning them into AI-generated podcasts for profit, Kevin. Oh my god. In some cases, people only found out that their Zoom calls were recorded once Webinar TV reached out to them to say their call was turned into a podcast, in an attempt to promote Webinar TV's services. Wow. What is happening? What is happening? Okay, I want to start by saying I'm committed to making a podcast with you for the rest of my life. But if we ever get overtaken on the charts by an AI-generated Webinar TV podcast that's been trained on people's boring-ass Zoom meetings, I am leaving this industry. Here's why this is such great news. I think a lot of podcasters struggle with the idea that maybe their podcast, you know, maybe they didn't have a great episode. Maybe they're wondering, like, is this thing good enough to put out on the internet? Congratulations, because every single human-made podcast is better than every single Webinar TV episode that's ever been released. Yeah, I mean, I'm just like, these have to be the most boring podcasts ever created. Like, what are you going to talk about? Is it called Action Items? Is it called Circle Back? What's the title of this podcast? Touchbase: A Limited Eight-Part Series.
I heard there's a great uh series over on Webinar TV right now. It's called, um, Oh, I Think You're on Mute. Um, you may want to check that one out. All right, stop generating.

Next, out of the hat, we have North Korean hackers suspected in Axios software tool breach. This comes to us from Bloomberg, and it's about Axios, not the media company. I actually would prefer to read a story about this from Axios, if you have one on hand. This is an open source tool widely used to develop software applications. This has been a big security breach. Hackers were able to breach one of the few accounts that can release new versions of Axios late on Monday and publish malicious versions. Axios is downloaded about 80 million times every week. Anyone who has downloaded the malicious version of Axios could then have their own computer and the data on it stolen by hackers. This is being attributed to North Korea. Seems really bad. Yeah, man. Like, there's a lot of cybersecurity incidents we'll talk about where it's like, you know, but no personal data was stolen, or, you know, nothing sensitive was at risk. This is one where it's like, no, everything was at risk. Like, this is one of the bad ones. And uh, you know, if you've been messing around uh with npm over the past week, you probably need to take a look at this. Yeah, I think this is gonna be one of the biggest stories of the year, just what is happening in cybersecurity right now. Um, I was watching this YouTube video. If you ever need something to keep you up at night, watch a talk given by this guy Nicholas Carlini, who's a security researcher at Anthropic, at a uh cybersecurity conference recently. It is like the most terrifying conference speech ever given.
Because what he's basically saying is these AI tools have gotten better than almost any human hacker, any human security expert, at finding vulnerabilities in tools. Even tools that have been around for decades, like the Linux kernel, these language models are now finding bugs in them. And basically every piece of code that exists is going to need to be rewritten and substantially hardened, because we are facing like an onslaught of these very sophisticated AI tools that can find every little bug and problem in them. Well, I am gonna watch that talk just as soon as I'm finished watching Fruit Love Island. But you know, the thing that this brought to mind for me, Kevin, was that last week, while we were away, there was this Anthropic leak where someone found uh a draft of a blog post that said that Anthropic was delaying the release of its next model so that it could share it with cyber defenders, basically. To my knowledge, we have not seen something like this happen since GPT-2 in 2019: one of the big labs saying, like, essentially, we're... What is the present tense? What it might wreak? Yes. Because of what it might wreak. That's wreak with a W. Yes. Not with an R-E. Speaking of reeking, take a shower next week. Hey, I was in a hurry. Stop generating.

Okay, you're up. Uh, okay, so this is actually a two-parter, Kevin. Uh, two stories about OpenAI recently that caught our attention. One, Sora has shut down, which was a prediction that I made at our year-end episode. Yes, you called this one. This was my low confidence prediction for the year, and it's already come true by March. Um, and then a second story, which I think, actually, crazily enough, is related: OpenAI has apparently uh shelved its plans to release the erotic chatbot, or sort of the like adult mode that it said it was gonna be bringing soon to ChatGPT in an effort to boost engagement. So Kevin, dying to know what you made of those two changes. So I think you were smart to predict the end of Sora.
I think the um the story with Sora never quite made sense to me. Like, it was obviously a very cool piece of technology. It was devastatingly expensive to run, um, is my understanding. Like, generating all those short videos was computationally quite pricey. And so I think they are making the decision to sort of spread their bets a little less and consolidate around like a few projects, one being enterprise uh AI, one being coding and sort of automating AI research. But I think they maybe made a few too many side bets in the past couple of years that they are now seeing were expensive and diverted resources away from the core. I have to say, I was personally really glad to see both of these changes. Like, the release of this infinite slop feed app last year, and the company saying that they were going to release this adult mode while they were still having all of these issues with like

This excerpt was generated by Pod-telligence