Hard Fork

The New York Times

One Good Thing: Moon and Weather

From Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing, Apr 10, 2026

Excerpt from Hard Fork

Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing, Apr 10, 2026 — starts at 0:00

I'm Dane Brugler. I cover the NFL draft for The Athletic. Our draft guide picked up the name The Beast because of the crazy amount of information that's included. I'm looking at thousands of players, putting together hundreds of scouting reports. I've been covering this year's draft since last year's draft. There is a lot in The Beast that you simply can't find anywhere else. This is the kind of in-depth, unique journalism you get from The Athletic and The New York Times. You can subscribe at NYTimes.com slash subscribe.

Casey, I got a haircut yesterday. Thanks for noticing. Kevin, it looks extraordinary. Has this ever happened to you? I went into the barber. I sat down in the chair. He did not ask me what I wanted. He just started cutting. Has this ever happened to you? No, because they know I'm not straight. With a straight guy, you don't need to ask them. You just give the standard haircut that a man gets. He one-shotted my hair. He said, yeah, I've seen this before. I know what I'm doing here. Whereas if I walk in, it's like, okay, let me get out the schematics and walk you through it a little bit. So it's not like he knew me. See, this is exactly it. The fact that you just go to random barbers and will accept whoever happens to be there is why they can just start cutting your hair. Oh, who is this? Yeah, I don't know this person. Do whatever the hell you want. See if I care. Yeah, yeah. That is the straight approach to hair. But it's working great for you. Thank you. Yeah. Appreciate it.

I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, the dangerous new AI model that has cybersecurity experts on high alert. Then, New Yorker writers Ronan Farrow and Andrew Marantz join us to discuss their spicy new profile of Sam Altman. And finally, it's time for one good thing. Although I guess really there are two things in the segment. Yeah, we should really rename the segment. Okay.
Casey, we have a big announcement. Kevin, what is the announcement? We're ending the show. No. You're finally free, America. Yes. No, on June 10th in San Francisco, we are doing the second ever installment of Hard Fork Live. It's too fast, it's too furious, and it's happening. I tried to get them to let me call it Too Hard to Fork, but they decided that was not appropriate. Kevin, where can people get more information about Hard Fork Live? Okay, it's happening on June 10th in San Francisco at the Blue Shield of California Theater. Bigger venue than last year. Tickets will be on sale at NYTimes.com slash events. Not today, but next Friday, April 17th. So we're giving you a full week to get your act together, reach out to all your friends, use Meta AI to plan a trip to California. Use Claude Code to build your scraper bots to scoop up all the tickets. And on Friday the 17th, you can buy tickets. Yes. And we will just say in advance, last year the tickets did sell very quickly. They did. So get in there quickly if you wanna come. There would be more tickets available, but Kevin reserves fifty for, quote, his team, which I don't even know what all these people are doing at this point, but they'll be there. So get your tickets next Friday, April 17th, at NYTimes.com slash events.

Well, Casey, as you know, on this podcast we have a rule about discussing AI models called Ship It or Zip It. Ship It or Zip It. Unless you're actually putting it in people's hands, we usually do not want to hear about it. Yes, but today we are making an exception for the new Anthropic model, Claude Mythos Preview, which was just announced but not released, for reasons that we will talk about. But first, since this will be a segment and a show about AI, our disclosures. I work for The New York Times, which is suing OpenAI. And my fiancé works at Anthropic. Casey, this is, I wanna say, like the biggest story of the year in AI. I know there's been a lot of AI news.
I know that people are probably saying, oh, here they go, talking about another model again. I am telling you, this is something that people need to be paying attention to because of the implications, because of the way it was rolled out, and because of the model itself, and we will get to all of that. But do you agree that this is a big deal? Well, you know, when we were talking about the show this week and we were kicking around the idea of, hey, exactly how big do we think this is? You pointed out that one question people have been asking this week is, are we gonna have to rewrite all software? And I feel like usually when folks are kicking that question around, it's a big story. Let's just talk through what was actually announced this week. So on Tuesday, Anthropic announced that it was starting something called Project Glasswing. The name Project Glasswing refers to the glasswing butterfly, which has transparent wings and so can hide in plain sight, and that is thematically important for reasons that we will come back to. It's also a delicacy in some countries. I've never had glasswing butterfly. Notably, they are not releasing this model to the public, because they claim it is too dangerous to do that. Instead, they are giving access to a consortium of tech companies, including Cisco, Broadcom, sort of makers of internet infrastructure, as well as Microsoft, Apple, Amazon. Basically every big tech company that is not OpenAI or Meta is getting access to this model, but not general access, just access to do defensive cybersecurity testing. Basically to go out and harden their systems and their infrastructure and their software before the general public can get its hands on this model. So what are some examples of what Mythos was doing in training that so alarmed Anthropic that it came to this point?
So, Anthropic has been running this model internally for several weeks now, and they claim that this thing has found vulnerabilities in every major operating system and web browser. They gave some examples that have already been patched. One of them was that this model apparently found a 27-year-old security flaw in OpenBSD. OpenBSD is an open source operating system that runs on firewalls and routers. It is sort of like a critical security layer on the internet. And it was designed specifically to be hard to hack. Right. And this model, because of its advanced coding and reasoning capabilities, was able to find this bug that 27 years' worth of professional security researchers had not been able to find. What else? Another example was that it found a bug in a popular piece of open source video software called FFmpeg that had, according to Anthropic, been scanned for bugs five million times by automated security tools without finding this critical exploit. And that's why it's important to always look the five million and first time, because you might find something. Now, Casey, I think for people who are not cybersecurity experts, it might be worth sort of sketching the context here for how software works. Yes. So, you know, every piece of software, every operating system, every app, every web browser that people use is built on a mixture of tools. Some of those tools are proprietary to the companies that make the software. Some of them are sort of shared open source tools that are just in everything. Companies will just grab this open source thing and plug it into their thing, because it's compatible with everything else. It saves you a lot of time and trouble. It's already been security tested by, sometimes, decades of researchers. And a big piece of the foundation layer of the internet is these open source software projects.
What is happening now, according to Anthropic, is that they can basically use this model, Claude Mythos Preview, to proactively go out and find all of the unfound bugs (they call these zero-day exploits) with a speed and efficiency that no human security research team could match. Yeah. And you know, I would say that it can be difficult to talk about cybersecurity in a way that resonates with people, for a couple reasons. One is just that cybersecurity as a field exists almost entirely to alarm people and say, here are a bunch of problems, and these are really scary. You know, I hope that folks in the cybersecurity field would not mind me saying, it is just kind of an alarmist profession, and when I've talked to these people over the past fifteen years, they've been telling me, look, the entire internet is held together with spit and glue, and we're very lucky that there hasn't been a catastrophe yet. Okay. So after all of this news came out, I was like, I want to talk to some people who are at least not working for Anthropic or this consortium to try to give me a gut check on how big a deal this is. And so I talked to Alex Stamos, who formerly led security at Yahoo and then Facebook. And Alex said, yes, this is a big deal. And he had been hoping for a long time that we would see a consortium come together like this, because of exactly what you just said, Kevin. The intelligence in these machines and their ability to work autonomously are now great enough that they can chain together exploits that human beings either would never see, would take a long time to see, or would just never get to, because we're limited in ways that these machines are not. So that got my attention.
Now, we should also talk about what the strategy is here from Anthropic, because I think a lot of people see an AI company that is known for being alarmist about safety say, we've created this powerful, spooky new model, and we're not gonna show you because it's too powerful and spooky, as some kind of marketing tactic. So I think we should just say that is not, to my understanding, the case here. No. In my mind, it is obvious why. If you're a corporation and you release a tool, and people with no real technical expertise are able to use it and within a few hours discover a novel exploit in the Linux kernel and then take over other people's machines to commit crimes, you might be held liable as a corporation. You will get in trouble. There will be congressional hearings. So companies, just in their rational self-interest, do not want to sell cyber weapons on the open market. Yes. It's also like, if this was a marketing strategy, it is a horrible marketing strategy. The government already thinks you're a bunch of panicky doomers. Yeah. You have a new model that you claim is the most powerful model in the world. So instead of selling it, you give a hundred million dollars of Claude credits away to a consortium of companies that includes many of your competitors, which is what Anthropic is doing. That is not how I personally would market a spooky new model if I were in the business of marketing spooky new models. Yeah. Now, look, it may be that despite everything we just said, there is still some marketing benefit to Anthropic from doing this, right? Like, we know that they saw a huge increase in their revenue after they took that stand against the Pentagon. And in that stand, they said, we are determined to do things in a really safe way. It seemed like the business world really liked that.
And so I could imagine there being a business benefit to Anthropic of coming out and saying, we have the most powerful model in the world and we're not releasing it. Like, yes, I'm sure there are plenty of businesses that are salivating over the chance to get their hands on this model. But they can't, unless they are part of this consortium. So they are at least claiming that they are trying to get ahead of what they envision will be a reckoning, I believe was the word they used, for cybersecurity. And it seems plausible to me that in the next kind of six-ish months, every major piece of software in the world is going to need to be patched, rewritten, and re-released. So just an absolutely massive project. Let me ask you this. You know, Alex Stamos, the security expert that I mentioned, told me that he sees essentially two broad possibilities. One is, and this is the good scenario, there are a finite number of critical bugs and vulnerabilities to be found, and maybe, if we all work really, really hard over the next six months or however long it turns out to be, we will be able to patch those vulnerabilities and our infrastructure will remain safe and stable. The other possibility is that this model is already good enough that it can simply invent exploits that we never would have thought of, and so this will essentially just be a really, really big problem that potentially keeps growing in scope, because, you know, maybe eventually you hit some sort of true superintelligence point. So I'm curious if you've talked to people about what they see the scenarios as, and if you have any thought as to which of those two is more likely. So I think it's possible that they will patch the sort of top one percent of critical software, right? The stuff that everyone knows is important. Your Linux, your very popular open source libraries, your routing equipment and networking equipment.
Like, it seems plausible to me that a couple of companies with the right resources and the right models could find and fix the worst security vulnerabilities. But I also talked to people who were telling me that it's not as simple as that, because once you get outside that top one percent of critical infrastructure, there are just a lot of machines running on old code. Right? So it's theoretically possible that all of these fixes could be submitted to the people who maintain these software projects, but that, A, there aren't enough humans to review all of the proposed bugs and fixes, so that serves as a human bottleneck, or that there is just a lag between when a piece of software is patched and when the person running the router at the medium-sized business in Tulsa decides to update the firmware or install the security patch. So people can expect a lot of apps asking them to update their software or reinstall their software over the next few months. I've started getting a few of these already. Have you started getting these? Yeah. So I think this is going to be a kind of forced reset for the entire cybersecurity industry and a very significant event in the history of technology. Yeah. Well, and just to make it concrete, we are currently at war with Iran, and Iran is currently hacking our critical infrastructure. There was a story in Wired this week about them successfully hacking water and energy infrastructure. Right now, they're able to do that without a Mythos-quality model. I would be quite nervous about what they could do if something like that fell into their hands. So this really is not an abstract concern that we're laying out. Right.
And we should talk about the government piece of this, because one weird characteristic of this moment is that this very powerful, advanced model that Anthropic claims is capable of doing autonomous cybersecurity research and attacks comes from a company that the U.S. government has spent the last several months trying to kill. Yeah. Yeah. It has tried to declare Anthropic a supply chain risk. They have ordered all federal agencies to stop using Claude. And so my understanding is there have been some conversations between Anthropic and parts of the, you know, sort of national security establishment and apparatus about this model. But it is also simultaneously true that they cannot use this model without running afoul of the administration. So a private company right here in San Francisco currently has a technology that they claim is capable of finding critical security vulnerabilities in every major operating system and web browser in the world, and the U.S. government, to my knowledge, does not have access to this technology. Yeah, it does seem like something that our national security infrastructure would want to have access to. One more piece on the regulatory front. It is crazy to me that model development of this scale and seriousness remains essentially unregulated in this country, right? Here you have a private company saying, well, we have now created software that can create so many different kinds of novel exploits that all software might have to be rewritten. And they are not really under any kind of regulatory regime. And the regulatory regime that the previous administration tried to put into place was thrown out by the current one because it might harm American competitiveness. So I just want to say, that makes me really, really uncomfortable. I think that if you're making stuff this powerful, regulators ought to be paying attention. Yeah.
One interesting sort of historical note that I'll make here is that for the past few years, at least, there has not been a significant gap between what the AI companies have built internally and what the public has access to. You know, maybe there's a slightly better model that the companies are working on that they need to spend a few months testing before they release it, or it runs a little faster than the one that you have access to. Yeah. But there has not been a significant gap since, I think, GPT-2, which was in 2019, which involved some of the leaders of Anthropic, who were then at OpenAI, making a decision to hold back this model, GPT-2, out of fears that it could be used for things like automating propaganda and misinformation. Right. In reality, it could barely write a limerick. Yes. They erred on the side of caution. They did, and they got a lot of crap for that. People sort of said, oh, you're using this for hype, some of the same stuff we're hearing this week about Anthropic. And I think in that case, they were probably a little overexcited about what this model could do, but they wanted to make sure that they weren't wrong. And so they held this back. And that created a gap of at least, you know, a couple months to maybe a year between what the average person could see and what was happening inside the AI labs. That gap is now open again. There is now a model that you and I cannot use, that our listeners cannot use unless they work at one of these companies in cybersecurity defense, and all we have to go on is what the AI companies are claiming. And I think, as hostile and suspicious as people feel toward the AI industry, that only gets worse if they think that there are secrets being kept in a basement that they can't access. And I think that it creates paranoia and fear. I think that it is generally responsible to have transparency from the AI companies about how capable their models are.
And I understand in this case that Anthropic felt like it had to make an exception, but I think this gap may be here to stay, is the thing that I'm wondering about. I think it probably is. I mean, it's worth saying that Anthropic was founded on the idea that if it could build models that were at the state of the art, at the frontier, it could have some influence over that frontier and could guide it to a safer place than it otherwise might have gone. To me, the Pentagon fight and now Mythos are examples of that thesis in action, right? Where it made the best model, and that gives it some room to try to do a little bit of good. So, you know, blocking domestic surveillance and autonomous weapons for a little while, or preventing bad actors from getting their hands on tools that could create novel exploits. At the same time, in order to do that, they had to build the model in the first place. And there is a risk that there is some sort of, I don't know, intellectual property leakage, that somehow all of the innovations they're building are going to trickle down into other places. And my fear is just that it becomes this sort of self-fulfilling prophecy, right? Where we have to build this frontier, even though it's dangerous, and we're gonna guide it to this safer place. But you know, you did build the thing in the first place. So I just like reminding people of that tension, because it is not actually inevitable that we build these systems, and yet we do often act as if that were the case. Yeah. Last thing: a lot of the people I know who are plugged into the cybersecurity world are being asked right now what people should do about their own security if they are worried that models like this will become public. Should they be, like, locking down all their accounts and moving their cryptocurrency into cold storage?
Like, what do you think people should be doing in anticipation that something like this will become public? You know, it's funny, I had a friend ask me that just this morning as I was preparing for the podcast, and I said, you know, a couple of things. One, to some extent, we're just going to have to wait. I mean, to the extent that any of what we've just described is good news, it is that the defenders appear like they're going to have some runway to fix some really bad problems before the bad guys catch up. So I think we should give them a little bit of room to see what they can do. If it does emerge that there is a similar model that can wreak havoc, rest assured, there will be segments about it on Hard Fork, and we'll have some updated guidance. But I asked my friend, do you have a password manager, and do you reuse passwords? And she said, you know, I've never really been able to get one of those password managers to work for me, and I do sometimes reuse my passwords. So I said, look, if you're looking for something that you can do, just make sure that you have done your basic online cybersecurity hygiene. You should use a password manager. I use 1Password; there are many others out there that are just as good. Don't reuse the same password for anything. Your passwords should be randomly generated and not, you know, the name of your pet or whatever. And then use multi-factor authentication where you can, right? So don't let anybody get into your Gmail or your banking account just by typing in eight letters. You should also be, you know, using an authenticator app. And so those are some of the basic things that I would tell people to do, Kevin. Yeah. I am planning to deal with the possibility of a massive cybersecurity breach by just sort of selectively dribbling out incriminating things about myself. Okay.
Just sort of trying to get ahead of any hacks that might expose my, you know, emails going back decades or anything like that. So I'll just say, in that spirit: I used to like the Black Eyed Peas. And I still do. Let's get it started. Now that was a critical vulnerability that I just exposed. When we come back, we'll talk to New Yorker writers Ronan Farrow and Andrew Marantz about their investigation into Sam Altman. I also sent them some stuff about you. Oh boy.

Hi, I'm Solana Pyne. I'm the director of video at The New York Times. For years, my team has made videos that bring you closer to big news moments. Videos by Times journalists who have the expertise to help you understand what's going on. Now we're bringing those videos to you in the Watch tab in the New York Times app. It's a dedicated video feed where you know you can trust what you're seeing. All the videos there are free for anyone to watch. You don't have to be a subscriber. Download the New York Times app to start watching.

Well, Casey, the talk of the town in San Francisco this week has been, well, there have been two talks of the town. One we already covered in our A block: that was Claude Mythos. This town conducts multiple conversations at the same time. Amazing at multitasking. Yeah. The other big talker this week has been this big piece in The New Yorker about Sam Altman. Yes, more than 16,000 words devoted to a question that has come up once or twice on Hard Fork, Kevin, which is: can Sam Altman be trusted? Yes. The writers on the piece are Ronan Farrow, famous for his work on the Harvey Weinstein investigation and others, and Andrew Marantz, who is a good friend of mine and a longtime writer at The New Yorker. They worked on this piece for a very long time, talked to many, many people in and around Sam's orbit, and tried to answer the question of, like, who is this guy? Yeah, and also, why does that matter, right?
We're talking during a week where these systems have arguably experienced a step change in what they can do. And I think those kinds of advances just naturally should draw more scrutiny onto the people running these companies. What do we know about who they are, how they operate, whether they're honest with each other? And this piece offers one of the more comprehensive portraits that we have had so far, I would say, on that question. You know, Ronan Farrow investigating you has to be one of the scariest experiences. But it seems hot, too, you know. That's what everyone wants, is just a really handsome man asking them a lot of questions. You know? Okay. So let's bring in Ronan Farrow and Andrew Marantz. Ronan Farrow and Andrew Marantz, welcome to Hard Fork. Thank you, guys. Happy to be here. I mean, truly, longtime, first time. And in fact, I brought receipts to that effect. This is your show, you can take or leave this in the edit, but I wanted to show what a devoted longtime fan I am of Hard Fork. I know the show well. I know you guys like merch. And I know you guys like disclosures, but you don't have any disclosure merch, to my knowledge. So I had these made for you. Come on! One for each. One for you, one for you. I'm gonna put it in the mail after we get off. But one of them says: I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity for alleged copyright violations. The other one says: And my fiancé works at Anthropic. Oh my gosh. That is amazing. So I think it's time-limited. It's gonna be a time capsule. But I mean, made at the print shop in Brooklyn, one of a kind, exists nowhere else on earth. That's incredible. I think I should also note, I gave you one at your wedding, so I think we have a sort of theme going on here. Okay. Right. Well, and that's also our disclosure, which is that Kevin and I are buds and have known each other forever. So actually, Casey, you can come to me anytime.
I know you guys like to rib and roast on the show, so you can come to me behind the scenes for any roastable Kevin material. My dream has been to get The New Yorker to investigate Kevin Roose. So you guys really could not have come along at a better time. Don't tempt us. Yeah. I'm not picking up the phone. Okay. Let's talk about this big piece that you both just published in The New Yorker. The title of the piece is "Can Sam Altman Be Trusted?" Now, usually there's this sort of folk rule about headlines that end with question marks, which is that the answer is always no. So I want to put this question to you: can Sam Altman be trusted? Well, I think one important thing to note is the piece is really forensic and even, actually to a point where I've been happy to see there's a range of reactions, right? There are people who have answered that question in a very severe way and looked at the fact pattern that is laid out here and the documentation that's laid out and said, you know, this is someone who poses an acute danger and should be kept away from a position of authority. And then there are people who, I mean, hilariously enough, my mother called me and she's like, you know, I kinda like him. And so I think that is a true reflection of our intentions. In this case, as you might imagine, there was deep consultation with all of the subjects of the reporting to really understand their feelings. And any time we thought there was a persuasive argument from Sam or anyone else that something shouldn't make it in or something would be sensationalist, we really carefully discussed that editorially.
So the result is very even. And I would say, on the question itself, what we lay out is something that is remarkable, I'd say, even against the backdrop of the culture of mistrust in Silicon Valley, where everybody understands and expects, right, that being a founder means telling different audiences different things at times, to some extent, where everyone understands that the entire enterprise is built on hype long before there is an actual, actionable, deliverable product. Even against that backdrop, there is an extraordinary preponderance of people who emerge from interactions with Sam Altman, including close, years-long ones, with really active complaints and allegations that he lies repeatedly about things big and small. Well, one of my favorites was when you quote him telling you that he wears a gray sweater every day to avoid decision fatigue, and then he shows up for his next interview in a green sweater. That felt like a really sad thing. I appreciate that eye for fashion that you so rarely get in these tech profiles. Andrew was our fashionista in the writers' room. Always. But that's the kind of thing where, you know, we didn't wanna make too much of that, right? Because it's like, oh, we caught you in this deep hypocrisy of choosing a green sweater. And this is consistent with a lot of the things people say throughout the piece and throughout the career of Altman and OpenAI, which is that there isn't this one smoking-gun thing where he's caught, you know, with his hand in the cookie jar. It's this sort of allegedly longer, more subtle accumulation of facts. And my kind of glib and, you know, annoying way of describing it is that the fabled memos and documents that were compiled, that led to him being fired in 2023, and that have kind of dogged him throughout his career, really shouldn't have been a secret bullet-pointed list.
They should have been a 16,000-word New Yorker piece, because they only really make sense when you lay them all out together in narrative form. Yeah. I mean, you guys mention in your story that there have been sort of these rap sheets circulating about Sam inside OpenAI and other parts of the AI industry for years. One of them was compiled by Dario Amodei when he worked at OpenAI under Sam Altman. One of them, you said, was maybe circulated by some allies of Elon Musk and people who are opposed to OpenAI. So give us some behind-the-scenes details about what is being said, by whom, and how, and to what ends, about Sam Altman in Silicon Valley. Well, it was really important to us to filter for the obvious competitive incentives out there. There are people who are massively incentivized to go after Sam Altman. And the reality is that there are very firmly evidence-based critiques, many of which are promulgated not just by the rivals, although they're certainly amplified by them happily, but also by more neutral figures and people who are just kind of technologists who aren't in the fight. And then there is the white-hot center of the rivalry, the stuff you mentioned, that I think is in a very different category, which is, you know, Elon Musk and other direct competitors really amplifying everything they can come up with. And in some cases, we document things that are inflated or trumped up or just seem to not be true. So Elon Musk in particular has intermediaries circulating some pretty spicy and pretty unsubstantiated material in Silicon Valley. And we talk about that. I really appreciated that about the piece, because this has become more salient over the past year as these rivalries heat up and you hear more and more of these scurrilous rumors.
And while I do think this winds up being a pretty damning portrait of Sam on the whole, you do also point out that in some very real ways he's the subject of a legitimate smear campaign. Yeah. Oh yeah. I think that's absolutely accurate. And we were trying not to go in, you know, with the naivete of, like, can you believe business titans are being mean to each other? But the level of this really does seem kind of shocking and unprecedented. And, you know, it's kind of consistent with people who think of this as, whoever gets the ring first will control the world. It just seems like all bets are off. And so as a reporter, it's very challenging to be like, do you bring up the scurrilous rumors to knock them down? And so we had, like, months of conversations about how best to do that. So there's been a lot of reporting on Sam Altman, especially around the board coup a few years ago. Could you maybe give us the two or three things that you think are new and important from your reporting that rise above the rest in terms of people's understanding of Sam Altman and OpenAI? So I think there are things here that put to rest some of the longstanding rumors, right? I mean, Altman has always said, and Paul Graham at Y Combinator has always said, that he was not pushed out, that he left of his own volition. It really seems from our reporting that that was not the case. They have talked a lot about their fundraising in the Gulf, in the Middle East, as innocuous: all businesses do this. It really seems from our reporting that the relationships that Sam has cultivated with some Emirati and Saudi royals are deeper than was previously realized. Ronan, what am I missing? There are several things like this. We just didn't really know in full what was in those Ilya Sutskever memos.
We didn't really have the detailed, multiple-sourced, heavily documented accounts of the individual proof points that were offered in those memos. We didn't have the contents of those Dario Amodei notes. And we didn't have a lot of these people on the record yet. So I think actually, in a way, that was a disservice, not only to Sam's critics, but also to Sam himself; there was a bit of a veil of mystery. And that wasn't purely accidental. One of the things we document that's new here is that as a condition of their exit, the board members who had moved against Sam, whom he wanted out, insisted on an outside investigation. What happened there is, in my view, quite extraordinary, which is, yes, at private companies, sometimes reports of this type, when a law firm is brought in to restore legitimacy, can be kept out of writing. Often it's to limit liability. And often legal experts say it's a bit of a red flag. This is a different kind of case. This isn't just any private company. This is a high-profile scandal that engulfed Silicon Valley when Sam was fired. And ostensibly at a nonprofit. At a 501(c)(3), exactly. And so there were stakeholders, not just in the public but, you know, within this company, for whom this would be the bare minimum threshold, right? Where senior executives thought, okay, we're gonna get some kind of at least detailed summary of what this law firm investigation found when they invoke it to rubber-stamp Sam coming back. And instead, what happened was an 800-word press release that said there had vaguely been a breakdown in trust and offered very few other details. And what we report in this piece for the first time is there wasn't a report. For years, people were like, where's the report? Where's the report? There wasn't one, because it was kept out of writing. And this is no longer just a speculative supposition.
One of the two board members whom Sam helped select, who oversaw this process, just explicitly says, well, you know, a written report was not needed; that is now their line on this. Yeah, I'm glad you brought it up. It was actually my favorite detail in the piece, because it was something I'd been curious about forever. I mean, the thing that I found most interesting from the piece were the people who spoke on the record, or at least gave you quotes, some of them unattributed, about Sam, who, you know, I think previously might have supported him, or at least felt like there was no upside in talking about him in a negative way in public. There was a Microsoft executive quoted in your piece as saying that there's a small but real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried level scammer. There's another unnamed board member who said, quote, he's unconstrained by truth, and said that he has, quote, an almost sociopathic lack of concern for the consequences that may come from deceiving someone. I haven't been on a lot of corporate boards, but I think that is something that's quite rare to hear a board member say about the CEO of a company. I'm just curious: when you were weighing these statements, did you feel like there are people who used to be fans of Sam who have soured on him, or are these people who have really held a grudge against him for a long time? The thing that you point out about people changing their tune over time, I think, is an integral part of what we document in the piece, which is, you know, the fact that Sam Altman comes up through this Y Combinator world is not incidental. The fact that he has an investment portfolio in, by his own estimation, about 400 other tech companies. The fact that he has sat on everyone's board and everyone has sat on his board.
I think our sort of line about this in the piece is: we spoke to people who are Sam's friends, Sam's enemies, and, given the mercenary nature of Silicon Valley, some people who have been both, right? So given that that's the landscape, you are gonna have people who change their tune as the wind blows different ways, and that's a lot of how Altman's been able to weather a lot of this stuff in the past. One thing that results from that spread of opinions, to your question about evolving takes on Sam: there's definitely a class of nuts-and-bolts investors, prominent people in Silicon Valley who are really pragmatists, not safetyists, who are growth- and business-oriented, who told us that at the time of Sam's firing, of the blip, they gave him the benefit of the doubt. And especially because of the factor we talked about before, where there just was a dearth of clear information. In that void, a lot of prominent people gave him the benefit of the doubt and saw only upside in bringing him back and removing the board that tried to fire him. There are a number of those prominent people in that category now who say, I don't know that I would have given him the benefit of the doubt. It just strikes me, though, that everyone who digs into this winds up coming back with essentially the same story. You know what I mean? It's like there are not seventeen versions of Sam Altman out there depending on which reporter calls which different source. I feel like we now sort of know the broad outlines of this person's psychology. I don't know. I wanna challenge that. Like, I do talk to people who are big fans of Sam, some of whom work for him, some of whom don't. Clearly this is a guy who has been able, at various points, to lead very important technology projects and rally people behind a vision. These people are not mindless sheep; they're critical and discerning and thoughtful people.
So I don't want to seem like I'm taking Sam's side on anything, but I just think that there are a lot of people with very strong feelings about Sam Altman, positive and negative. I think the positive side tends to be more people defending him in private, and the public side tends to be more people criticizing him. But I don't know. I guess, for Ronan and Andrew: do you feel like there are vocal supporters who you came across in reporting this story who had no direct employment relationship with OpenAI or Sam, or, you know, weren't leading companies that he invested in or something, who were like, yeah, this guy seems pretty good and smart and talented? Yeah, it was an eleven-year-old who used ChatGPT to pass sixth grade. Oh my god. No, no. There were legit defenders of Sam on a number of these fronts who we talked to, for sure. I think a lot of this has to do with what baseline expectation you are starting from. Like, if you think of this as a business and you start from the premise that people who run giant successful businesses have to say a lot of different things to a lot of different people, why is this even a story? I think, though, there's a kind of level-setting here, where one of the things you can do when you make a big, putting-everything-in-one-place narrative effort like this is start from the beginning and remember what the original pitch was. And when you go back to what the original pitch was, consider the defense of, why are you guys being so naive, this is a normal competitive business. Okay, so when you pitch this as a nonprofit, safety-focused research lab that would aggressively comply with all regulation, were the people who believed that naive to believe it at the time? Right. So that's when the defenses start to feel a little more pressured to me. Yeah.
Also, for what it's worth, you know, it's like, oh, is it really a story that this guy's telling different things to so many different groups? That's not really a story that gets told about Satya Nadella. It's not really a story that gets told about Sundar Pichai. It's not really a story that gets told about Tim Cook, right? There does seem to be something really unusual here. And my question for you guys, now that you've spent so much time immersed in this company, is what do you think it means for OpenAI? Well, I mean, luckily we have a really robust independent tech media, so I was gonna tune into TBPN and see what their independent journalistic take on this would be. I think the day after our piece closed, Ronan, or something like late last week, OpenAI acquired TBPN, which is this big sort of tech chat show. So that's one aspect of this answer, right? That as OpenAI expands and grows, they seem to be sort of buying up more of the media. Relatedly, by the way, a lot of announcements over there were concentrated around when they knew we were gonna be running, and developed in the period when we were in these intensive conversations with them. And many of them sort of pointed at the topics in the piece. You know, they announced this new safety fellowship that's very airy. They announced this new governance plan that's very sort of airy and ethereal, but they are meant, I think, to occupy space in the conversation on the same topics. And look, everyone, and Ronan, you should say more about this, but everyone, including Altman and the OpenAI execs we spoke to, recognizes the economic pressures here. I mean, I think you guys were there when he said, oh yeah, it's definitely a bubble and someone's gonna lose a phenomenal amount of money, right? Yeah. So even putting the sci-fi Skynet stuff aside, you know, the economic pressures are unavoidable.
And a lot of it has to do with this sort of pitchman rhetoric, the exact thing we're talking about, right? Because these things are contingent. It's not like, oh, will it be a bubble or not? How hyped up the cycle gets is a byproduct of how people like Sam go around the world talking about it. Yeah. I want to ask a sort of basic question that I think people have probably raised with you, which is: why does it matter who Sam Altman is? If what we are talking about is a technology that could have profound implications for national security, the economy, and potentially the future of humanity, it doesn't seem obvious to a lot of people why it matters who is running these companies. Because a very nice person who is very honest and very transparent in all their dealings could still release a rogue superintelligence that blows up the world. And a very manipulative person could release a very aligned model. And so what we should be paying attention to are the models themselves, not the people running the companies that make the models. I'm not saying I believe that, but I'm curious: what do you make of the argument that we are focusing too much on the humans and not enough on the technology? We probably both have thoughts on this. I think I have two. The first is that it's worth noting that while reasonable minds could perhaps differ on the question you just posed, the answer provided by Sam Altman and the founders of OpenAI was very clear. Part of the way the entire enterprise was structured when it was founded as a nonprofit was that they talked a lot about avoiding an AGI dictatorship. They really believed that the person who gets there first and has the most power over this technology is pivotal. The individual's integrity is formative to the way the technology goes and the way it's controlled and the way it's used.
The other thought I have is that, in my mind, you raise a valid point, and more significant than any of this is the structures around these individuals. We have a technology emerging that could really affect us all in all of the existential ways you just mentioned. And we don't have the regulatory guardrails to keep an eye on these folks. We are completely ceding the power to these individual companies and their whims, the mudfight between them, the quality control that each of them has or lacks. That, to me, is the big question. And the integrity of an individual figures in that, and it's important, but it reveals the weaknesses in the system. If you have someone who potentially lies all the time and could, in the eyes of many critics, be a danger, the important thing is to have the structures that account for that. There's a great quote that you guys have in the piece from one of his former coworkers, who talks about how Sam now has this track record of setting up these elaborate guardrails to keep him in check and then skillfully navigating around them. And it made me wonder if you had seen this piece in The Information this week about tensions that are being reported between Sam and his chief financial officer, Sarah Friar. She's reportedly expressed doubts that OpenAI will be ready for an IPO this year. And according to the story, Sam has noticeably and awkwardly excluded her from some conversations related to the company's financial plans and kept her out of some key meetings. I read that and I was like, well, this is exactly it. You bring in somebody whose job it is to look over the finances of the entire company and get it ready for an IPO, but then, for whatever reason, we're gonna exclude her from some meetings. So anyway, I just sort of feel like we really are seeing the exact pattern that you guys are writing about now repeating in real time. Yeah, and I mean, just to agree with all of this.
I think the thing that Kevin's bringing up, about, given the power of this, why are we focusing on one personality, I think that's very legit. This is way beyond one person, way beyond one personality. It's not like the point of the piece is that Sam shouldn't be the AGI dictator, so Elon should, or Demis should, or whatever, right? It's to point out that the fact that we're having a discussion about AGI dictators at all is insane. These guys know it's insane, and yet this seems to be the race that they see themselves being in. When he was fired, he was brought back in part because, I think, no one could really imagine an OpenAI without Sam Altman. Do you think that's still the case? I don't think it's unimaginable anymore. I think that part of reaching the scale that they've reached is that you can have a Steve Jobs figure be replaced by a Tim Cook figure, right? It seems like it's inseparable from reaching this scale that that becomes at least a possibility in people's minds. Right, Ronan? I mean, does that strike you that way? Absolutely. I think the landscape has changed substantially over the period of time we were reporting this story. The fact that gradually more and more people were talking openly about this critique is very telling. We report in the piece that there are periodic spasms of senior executives at OpenAI talking about succession again. Of course, naturally, the company denies this, but it's also very interesting that in recent forms of that discussion, there has been talk about Fiji Simo being sort of the first potential successor candidate who could slot into any plans of that type; that talk circulated between our asking about it and the piece coming out. Obviously, Simo has now gone on leave for medical reasons. There's a lot of reshuffling. We see it in the Sarah Friar case. I think you're right to link it to that quote in the article about constraints being sidelined.
And yet I think these doubts and questions persist and are now much more out in the open. Yeah, on the leadership question, it just strikes me that, for somebody who I assume wants to stay CEO for a long time, it's interesting that he's hired so many former public-company CEOs to be his top lieutenants, right? He has the former CEO of Instacart there. He has the former CEO of Nextdoor there. He has the former CEO of Slack there. So, you know, you're bringing a lot of really sharp and pointy elbows into the room when you do something like that. I'm trying to tell Sam that there's danger here. Pro tip, if you're listening, Sam. You know, there are people in this piece talking about earlier stretches of Sam Altman's career where they feel he was deliberately avoiding that. Actually, part of what underpinned the terrible, terrible fumbling of the firing effort was a feeling that Sam had kind of stacked the board with, as one former member put it, JV people. Or certainly, if we're being more charitable than that, people who were unprepared for the ruthless corporate warfare that ensued. And, you know, I think one thing that has accompanied the emergence of this as a more openly discussed critique is that there are more people around this company, more stakeholders, wanting professionalizing influences in the mix. I have to ask about one detail that I loved in the piece, which is that the first time Sam Altman and Dario Amodei were scheduled to meet, they were gonna meet at an Indian restaurant for dinner. This was back in, I guess, 2015. And Sam texted him and said that his Uber had gotten in a crash and he was gonna be ten minutes late to dinner. Now, you did not editorialize on that, but knowing you both, I'm sure that you went back through the Uber FOIA requests and found the logs of Sam Altman's Uber ride that night.
Is it your belief that Sam Altman's Uber actually got in a crash? I think we're just gonna leave that as non-editorialized and let it stand right there by itself. I mean, I will say we also had this conversation and really liked just presenting that, uninflected, for consideration. Okay, if you are the Uber driver who was driving Sam Altman to dinner with Dario Amodei and you are listening to this show, we do wanna hear from you. We do wanna hear your side. hardfork@nytimes.com. We will get to the bottom of this. We will. Well, it's a great piece. People should go read it. Please do not investigate any other AI companies before my book comes out. It was a very stressful week for me. Yeah, why don't you guys take a nice long spring-summer break before you get back at it? Yeah, look into some politicians or Hollywood executives or something. We'll send you some names. Luckily, it takes us as long to write a piece as it takes you to write a book. So I think... Exactly. It should be faster. Totally. Ronan and Andrew, thanks so much for coming. Thanks, guys. Thanks, guys. Your hats are in the mail. When we come back, what our Spanish-language friends would call una cosa buena. Did you just Google that? No. You Claude'd it? Yes. Okay, okay. Well, Casey, it's been a pretty heavy show today. Yeah. So we thought we wanted to end on a positive note with our segment called One Good Thing. One Good Thing, of course, is our segment where we each talk about one thing that's been tickling our fancy lately. Kevin, why don't you go first this time? Okay, Casey. I am in love with this space mission. Yes. The NASA Artemis II mission. I have been totally and earnestly obsessed. My wife was like, you sure are talking about this space mission a lot. I have been glued to this thing. And I have been filled with a childlike glee and wonder that I did not know I still had the capacity to feel. Now, what exactly are they doing on this mission?
Orbiting the moon. They are going further than any humans have gone from Earth before: 252,756 miles from Earth. And if you're wondering how many miles that is, well, the New York Times had a helpful comparison list. And what did they find? You would need a chain of 2.37 billion of Nathan's Famous hot dogs to cover the distance that this spacecraft has gone from Earth. That's great. Something we can all easily visualize. Thank you for that comparison. Casey, I am learning things that I never expected to learn. I've been watching this with my kid. I have become completely obsessed with concepts and terms that I did not know a week ago, including corona structure and the terminator line, which, I know you're wondering, sounds scary. Yeah. It's actually the line that separates the sunlit side of the moon from the side that is dark. Oh. I also learned that we don't call it the dark side of the moon. That's not the preferred astronomical term. The far side of the moon. The far side of the moon. I am obsessed with all of these astronauts. There are four of them up there: Victor, Christina, Jeremy, Reid. This is my Mount Rushmore. I love these people who I've never met. They are adorable. They are incredibly brave. And I think we should go to the moon every single year. I think we should give NASA whatever budget it needs to do this, because it has reignited my faith in humanity. Absolutely. You know, I also saw somebody on social media posting that because the mission specialist Christina Koch had communicated with Houston's Jenni Gibbons during the mission, this mission actually passed the Bechdel test, which you don't often see on these missions. So I thought that was cool. Also, somebody pointed out, they said, you know, the coolest thing about going on one of these missions, Kevin, would be leaving Florida at 5,000 miles an hour. So that resonated with me as well. Okay, you're more interested in the jokes.
I am filled with childlike wonder over here. And I just think this is the coolest thing imaginable. It is very cool. You know, recently I had an opportunity to go stargazing. I'm not sure if you've been stargazing recently. I was up on Mauna Kea on the island of Hawaii. And we had a really cool telescope there with our guide, and I got to stare at the face of the moon, and it inspired a childlike sense of wonder in me as well. But it did not make me want to go there, because it looked quite bleak, actually. You wouldn't go to the moon? No, there's no Wi-Fi. Okay, Casey, what is your one good thing this week? Today, Kevin, I want to talk about the only thing that can compete with the moon when it comes to inspiring childlike wonder in a person, and that is a weather app. Okay, I'm listening. So recently I was reading about these entrepreneurs, Adam Grossman, Josh Reyes, and Dan Brutin, and they are the team behind Acme Weather, which you probably have not heard of yet, but I bet you've heard of Dark Sky. Yes. Dark Sky was, by consensus, the best weather app on iOS, and while it reigned during the 2010s, and I'm using "reigned" in the non-meteorological sense, it would tell you whenever it rained. And now I am using it in the meteorological sense. Very good app. Yeah, this app was bought by Apple in 2020, which was kind of a head-scratcher. Apple already had a weather app; it was fine. And then Apple integrated some of its forecasts and some of its other features into its weather app and then shut Dark Sky down in 2022. And this made people really sad, because I think a lot of us feel, myself included, that the Apple Weather app has never lived up to what Dark Sky was in its heyday. It's like a prediction market. It's like, you know, maybe it's gonna rain. Exactly. Well, so these guys get back together and they say, frick it, we're doing weather apps again. And they make Acme Weather.
And so you can download this now for iOS. It is apparently coming later to Android. And I know what you're thinking, Kevin, which is: what could you possibly build in 2026 in a weather app that could differentiate it from all the other weather apps already on the market, right? Yes. Are you wondering this? I am wondering this. Well, let me tell you a few things. Number one, they don't just tell you the weather; they show you a range of possibilities in a line chart. So most of the time, it'll be like, yeah, it's gonna be 63 degrees and... This is the weather app for rationalists and other believers in Bayesian statistics. Exactly. Some of the other things that this app does: they will send you a push notification if they think there's gonna be lightning in your neighborhood. Okay. They will also do that when they think a sunset is going to be beautiful wherever you happen to be. Wow. They'll send you an umbrella reminder if it's gonna precipitate in the next twelve hours, and they'll send you a sunscreen alert when the UV index is high. But I'm saving my last two favorites for the end. Number one, they will send you an alert when the aurora borealis may be visible where you are. That's beautiful. I haven't gotten that notification yet, but I wake up every day hoping I'm gonna get my aurora borealis notification. You gotta go to Scandinavia, I think. Number two, and this is just in time for Pride: they will tell you when there is a rainbow in your neighborhood. Wow. Are you kidding me? This is such a good idea for a weather app. Who does not want to be sitting at your wage-slave job? You haven't been outside in, like, seven and a half hours. And then Acme Weather tells you, hey, guess what? There's a rainbow in your neighborhood. You're gonna book it outdoors and you are going to behold the majesty of creation. How are they possibly collecting that data?
Well, interestingly, they're taking this Waze-like approach where they're inviting their community to submit reports. And so if a bunch of people say, hey, rainbow in my neighborhood, they're gonna go and send out a notification. Wow. So now, look, this app does cost $25 a year. And I know, you know, probably most people out there are perfectly content with the free weather app on their phone. That is fine for you. But as somebody who loves cool things, new ideas, and people having fun, I just wanted to shout out Acme Weather, because I think it's a really cool thing. Now, what is the likelihood that this app will be purchased by Apple and then shut down? I mean, if that happens, I hope these guys get paid again, because somebody has to move the weather app industry forward, and these are the folks who are doing it. I love that. Like, grandpa, how did you make your fortune? Well, I built seventeen weather apps that were identical and then sold them all to Apple. I just also think it's inspiring that at a time when some companies are like, we're going to make a system that is going to force the world to rewrite all software, there are other guys who are like, what if there's a rainbow in my neighborhood? I want to find out about that. And those are the people that I want to highlight on today's show, Kevin. Okay. Well, download Acme Weather before the heat death of the universe renders weather irrelevant. And tell us whether you liked it. That was a good thing. Thank you. Thank you for alerting me to this wonderful rainbow detector. Well, thank you for alerting me to the existence of the moon. I know you weren't a big believer in the moon before, but hopefully I've convinced you today. Well, somebody told me something about a soundstage, and, you know, maybe the landing was faked, so I've just been curious. I think, you know, we're the only podcasters who actually believe in the moon. That's our competitive advantage.
Hard Fork: where we believe that people have been to the moon. Before we go, we are saying goodbye this week to our wonderful executive producer, Jen Poyant. Jen has been with the show for years, since almost the very beginning, and she's been a critical force in helping us make the show and conceive of the show. Jen is leaving the New York Times for a new adventure, but we wanted to give her a special shout-out and say thank you from the entire Hard Fork team for all of the amazing work you've done. It's true. Jen has been a friend and mentor to us both, and we will miss her terribly, but she will always be part of the Hard Fork family. Which means she has to bring a dish to the potluck.

This excerpt was generated by Pod-telligence
