
Decoder with Nilay Patel

The Verge

Future outlook for AI profitability

From The AI industry's existential race for profits, Apr 9, 2026

Excerpt from Decoder with Nilay Patel

The AI industry's existential race for profits, Apr 9, 2026 (starts at 0:00)

Dell PCs with Intel inside are built for every moment. With long-lasting battery life and built-in intelligence, you can stay focused on what matters most. Dell Technologies. Built for you. Dell.com/DellPCs. Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today, let's talk about the looming AI monetization cliff and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it. My guest today is Hayden Field, our senior AI reporter here at The Verge. She's been keeping close tabs on both Anthropic and OpenAI, and how those two companies in particular tell us a whole lot about the AI industry as a whole in 2026. You've certainly heard a version of the monetization cliff story before. Anthropic, OpenAI, and all the other big AI startups are built off the back of hundreds of billions in capital investment. And they're linked to even greater amounts of forward-looking investment in data center build-out, chips, and other infrastructure spend. At some point, that investment has to pay off, the profits have to materialize, or the bubble pops. Maybe AGI arrives, maybe the economy crashes, who knows. If you've been listening to Decoder, you've heard me talk about this with tons of CEOs right here on the show, and a majority of them have hinted toward the bubble popping. They think some companies will fail in spectacular fashion and others will succeed, but that the opportunities, and especially the money, are simply too big to ignore. The AI industry is going to do this, whether we want it to or not. The market depends on it. And so these last few weeks have felt like a very important inflection point, as both Anthropic and OpenAI have started to react to the reality of needing to go public to make money.
The catalyst for all this change is the rise of agents. Products like Claude Code and Cowork, the open source OpenClaw, and OpenAI's Codex have all radically changed how these companies are thinking about their resources. And that's starting to affect how they behave: the products they support or suddenly kill, the restrictions they impose on customers, and the money they're willing to burn on the way toward the next big milestone. That's because agents are valuable to customers right now, but agents also use far more compute. The way people are using agents is burning tokens at a rate way faster than these companies anticipated, and that's forcing hard decisions. We saw this most evidently last month when OpenAI abruptly killed its video generation app Sora, ditching a $1 billion Disney deal in the process. Why? Well, it cost too much to run, and OpenAI needs the compute for Codex. And we saw it again just last week, when Anthropic decided it would no longer let Claude users burn through compute resources using the OpenClaw agent framework through a standard subscription plan, instead forcing those users onto a pay-as-you-go plan, which costs substantially more. As you'll hear Hayden explain, these are glimmers of a make-or-break moment for the AI industry, as both Anthropic and OpenAI barrel toward two of the biggest IPOs in history. And the pressure on these companies to make money has never been this intense. The projections these companies have made, which leaked to The Wall Street Journal just this week, tell a story of mind-boggling growth, to the tune of hundreds of billions in revenue and profitability by the end of the decade. But the most important questions now are: can these companies pull all this off? And what compromises will they make to reach that goal and avoid crashing and burning? Before we start, a quick reminder that you can listen to this episode or any episode of Decoder completely ad-free by subscribing to The Verge.
Just go to theverge.com/subscribe. Okay: Verge senior AI reporter Hayden Field and the AI monetization cliff in the race to profitability. Here we go. Hayden Field, you're our senior AI reporter here at The Verge. Welcome back to Decoder. Thanks. It's great to be here. I'm excited to talk to you. There's a lot going on in the AI world recently. It feels like we are at a very important inflection point for this industry. What are you thinking right now? Yeah, we absolutely are. It's kind of time to pay the piper, in a way. They've been raising a ton of money and a ton of hype for years, and now, as these companies prepare to go public, and the competition is heating up more than ever, and they're entering all these different sectors and trying different side quests, it's finally time to really face the music and see how much money they can actually make. And there's never been more pressure on them. When you say them, I want to stay focused on OpenAI and Anthropic, which seem to be on different trajectories. Now, there are obviously other big AI companies in the mix. Google exists; it's going to do whatever Google does. But Google has a big business already. It can subsidize finding product-market fit with AI, it can subsidize making efficiency improvements on TPUs. It's just very different from OpenAI and Anthropic in particular, which have to become companies. Like, they have to graduate into becoming companies, particularly if they're going to go public, and then they're going to have shareholders and they're going to have to show profit and loss and all this other stuff. Can you just describe how OpenAI and Anthropic are currently situated and where they might be going? Sure. It's interesting, because in some ways they're in the same position. They're both reportedly preparing to go public this year and kind of racing each other to do it. And they're both constantly raising a ton of money and hype.
But where they differ, of course, is that OpenAI has traditionally been really courting the consumer-facing stuff, and some enterprise and government, but they've been focused equally, if not way more so, on consumer. Anthropic has always been pretty focused on enterprise, and they've remained pretty steady on that focus. So they're not really doing as many side quests. They're not rolling out as many other experiments or projects. They're just laser-focused on their enterprise goals and their enterprise clients. Now, is that all they're doing? No. Sometimes it seems like they get FOMO and they go into, you know, Claude for healthcare and Claude for education, things like that. But it doesn't totally remove them from their enterprise goals, because healthcare organizations and education systems are enterprises too, as we know. So they're really laser-focused on this. They kind of have the reputation of being the adult in the room in some ways, because they're not perceived as going wherever the wind blows them. They're on one trajectory, and they're staying really steady, it seems. Whereas OpenAI kind of has the reputation of changing their focus a bunch, internally and externally; people have said this. It's like going on a ton of side quests, trying things, throwing a ton of spaghetti at the wall: consumer, enterprise, government, everything, just seeing what works. And even Sam Altman himself has described OpenAI as kind of like betting on a ton of startups internally and just seeing which one pulls ahead.
But now they're finally having to realize: hey, maybe it's time to focus on the most money-making endeavors here and deprioritize some of these other projects, kill them off, so we can compete with Anthropic and focus on coding and enterprise. Yeah. I think that brings me to the news of this week that really made me feel like, oh, we're at an inflection point. And that is that Anthropic started raising its rates for people using tools like OpenClaw. They really want you in their system, on their subscription plans, using their tools, at their prices, their way. And if you want to use Claude to power other systems like OpenClaw, you're going to have to pay under a different rate structure that, to me, feels like they don't want you to do it at all. And then next to that, OpenAI killed Sora, which was their very buzzy video generation product that was basically a deepfake nightmare. But they also had a deal with Disney for a billion dollars, which always seemed confusing, and they canceled that deal too. Let's start with OpenAI. You're saying they're killing off all these side projects. They're trying to focus on Codex, which is fundamentally enterprise software, right? It is a tool for software developers to make software. Why did they kill Sora, and where did this sense of focus come from? I think the sense of focus honestly just comes from the competition and the fact that pressure is building on them to generate more revenue than ever. They've never had more eyes on them and their balance sheet in their whole company history, because they're preparing to go public and because they had just raised so many billions of dollars. Their post-money valuation right now is $852 billion. So yeah, investors are saying, okay, what's the plan here? What's the plan for returning our money?
So in order to deliver on those promises, they're having to devote not only their time and money and staff to the projects that are going to make the most money, but also their compute. That's something we saw execs at OpenAI talk about when they killed Sora. We saw a couple of internal memos go out. One of them was from Fiji Simo, the CEO of AGI deployment. And she said that basically the company needed to stop focusing on side quests and just dive fully into enterprise and coding. And yeah, compute is super limited. OpenAI is always, always talking about how they don't have enough compute to fulfill what they want to do or to scale appropriately. Sam Altman was talking to reporters at Dev Day in SF in October, and I've just never seen him so stressed as when he was talking about how the compute constraints were stopping them from scaling appropriately, and how they couldn't really deliver what clients wanted unless they could somehow get their hands on more compute. I've never seen them more stressed out. And so that's playing out now, months later. Sora took up a bunch of compute, and there wasn't really a big return there. So they abruptly decided to cancel it, apparently 30 minutes after working with Disney on a related project, and then just suddenly pulled the plug with no notice. So things over there seem like they're in kind of a tailspin. If you're pulling the plug on a project with a huge company like Disney 30 minutes after talking to them about how it was going great, there are some real issues there. I want to come back to OpenAI, its management, which has all but turned over since the last time you were on Decoder, and its strategic focus. But what really strikes me about the need for compute is, I don't know, when you were on the show a year ago or a year and a half ago, all of that compute was pointed at training, right?
We had to make bigger models. They were going to be more capable. GPT-95 would come out and it would be digital Jesus, or whatever it was. And the idea was that we needed bigger models with more data, and the scale of compute necessary was going to get us to the capable models and AGI in some way. Now it seems like the compute is all for inference, right? There are people who want to do things with these tools, particularly in software development. And if we don't scale up the compute to meet the demand, we'll get left out in the cold, because our big rival is sitting right there waiting to scoop up all those customers. Has anyone pointed out that shift explicitly, that we've gone from all the focus on training and capability in the model, to the models are pretty good, everyone wants to use them, and we need massive amounts of compute for inference? So in the numbers that leaked from investors in OpenAI and Anthropic this week, there are a ton of crazy charts showing just how much of their money and profit, or lack of profit, is going toward inference and training. The way they break out these categories is really interesting. It's not always apples to apples, but Anthropic is going to spend one third to one quarter as much on model training as OpenAI. OpenAI's revenue is expected to hit something like $275 billion in 2030, while Anthropic says its revenue will hit $150 billion in 2029. So Anthropic seems to be spending a lot less on the same categories as OpenAI and growing slower, while OpenAI is spending a ton and hoping that's going to lead to a return on investment, which kind of tracks with the way the companies have been operating for the past couple of years. Yeah. Anthropic seems to be much slower and more focused, and constantly worried that it's going to kill everyone in the world with every successive model. OpenAI is just pedal to the metal all the time.
Yeah, but Anthropic isn't worried enough that it'll stop. They just took out their frontier safety pledge and said, oh, actually, we're going to stay competitive even if we think it's a little dangerous. Sorry. The race to an IPO makes a lot of people rethink a lot of their values, apparently. We need to take a quick break. We'll be right back. Dell PCs with Intel inside are built for the moments you plan and for the ones you don't. For those early morning news sessions, or when you're on the go and leave your charger at home, or even the times you need the best processing speed to just get the job done. Dell built tech that adapts to you. Built with a long-lasting battery, so you're not scrambling for an outlet. And with built-in intelligence that makes updates around your schedule, not in the middle of it. Find technology built for the way you work at Dell.co.uk/DellPCs. Built for you. We're back with the Verge's senior AI reporter Hayden Field, talking about Anthropic and AI's race to profitability. The other piece of news, right, is again Anthropic pricing compute usage: a pricing structure that makes using Claude with OpenClaw much more expensive in different ways. So if you have a Claude Pro or Max subscription, you can't just hook it up to OpenClaw and go; you've got to buy tokens on top of that subscription. That in some ways felt inevitable. In other ways, it obviously made a lot of people very upset. It made the developers of OpenClaw upset. They said they could only delay the decision by a week. What's going on there? So when I talked to an economist about this this morning, he said that agents have just changed everything. And I talked to another couple of tech leaders about this, and they said agents are consuming hundreds of thousands more tokens than basic chat models have been.
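The economics behind that claim are easy to see with back-of-envelope arithmetic. This is a sketch with entirely hypothetical numbers (the per-token price, turn counts, and token counts are illustrations, not any provider's real pricing); the point is just that an agent loop multiplies both the number of model calls and the size of each call.

```python
# Back-of-envelope comparison of chat vs. agent token consumption.
# All numbers here are hypothetical illustrations, not real pricing.

PRICE_PER_MILLION_TOKENS = 15.00  # hypothetical $/1M tokens

def session_cost(turns: int, tokens_per_turn: int) -> float:
    """Cost of a session that makes `turns` model calls."""
    total_tokens = turns * tokens_per_turn
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# A human chatting: maybe 20 turns at ~1,000 tokens each (20k tokens total).
chat = session_cost(turns=20, tokens_per_turn=1_000)

# An agent loop: it re-prompts the model autonomously, and each step carries
# the growing context, so both turns and tokens-per-turn balloon (4M tokens).
agent = session_cost(turns=500, tokens_per_turn=8_000)

print(f"chat:  ${chat:.2f}")
print(f"agent: ${agent:.2f}")  # 200x the chat session under these assumptions
```

The same subscription price covers both sessions today, which is exactly the mismatch Anthropic's new rate structure is reacting to.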
So it does make sense, even if it's frustrating, because the way Anthropic put it was: hey, our infrastructure isn't built for this; we didn't plan for this. And if you're using OpenClaw with Claude, you basically have an agent prompting Claude on your behalf, delivering those responses back to Claude, and saying, no, do this; no, do this. So it's way more than a human would be able to do, and Anthropic's point was: we only really built this for humans to be able to prompt Claude right now, unless it's through our own products. Obviously, it's a money grab too. Of course they don't want a third-party tool doing the same stuff they want their own tools to do. Right. They want people using Cowork. Yeah, of course. So basically, they just want to keep it a walled garden. They want it to be a moat. That's kind of the only advantage AI companies have right now: trying to keep their users engaged and on their own platform, building a moat of some sort, because otherwise everyone loves to switch to whatever model is the best that day or that week. So yeah, this is just Anthropic looking to deepen its moat. And also, compute is constrained everywhere, so they don't really want a third-party tool prompting Claude way, way, way more than a human would be able to. They don't seem to mind when it's Cowork doing the prompting. Right. Because you're inside of Claude, and then they can monetize you in whatever ways they come up with to monetize you. Exactly. They want to monetize it if they can. Is there a path, either in enterprise for Anthropic or consumer for OpenAI, to actually make a dollar in profit? That seems to be the big question everybody has. Yeah, that's a really good question.
And it's something that, when I was checking with economists, they didn't feel would be that easy or likely, but they think one or two LLM providers will come out on top and the rest will have to consolidate. So yeah, there's a chance, for sure. Anthropic and OpenAI are both projecting some form of profitability in 2029 or 2030. Anthropic said maybe this year it'll also be slightly in the green, then go back into the red, and then back into the green for 2028, 2029. But I think they've all realized that if that's ever going to happen, it's going to be via the boring, unglamorous back-office stuff: enterprise, military contracts, government contracts, all of that. Because consumers, honestly, yeah, they'll maybe pay for a $200-a-month subscription if they're a power user, but there is no way that stuff is ever going to add up to the amount of money that's involved in these enterprise or government contracts. This seems to me like the point at which OpenAI faces a fork in the road. They were made to be a consumer business, as you've pointed out. It seems very much like they wanted to bite off some of Google's business. Google Search is one of the greatest successes in business history, maybe the greatest business you can run in world history. They haven't really succeeded, right? They might have shifted some search behavior to ChatGPT, but they haven't taken meaningful dollars away from Google. Google just keeps doing better and better, and they keep lacing Gemini into everything they make. And eventually, the idea that you would open ChatGPT to do a search when Google's going to deliver you something substantially the same, that's going to get harder and harder for OpenAI to compete on. Are they just pivoting away from consumer entirely? They hired all those Meta people to do ads, and the ads have come... Yeah, it's still early days for that, but it's funny.
I'm really interested to watch the ad stuff play out, but no, they're not pivoting away from consumer. It just looks like they're trying to really front-load their resources, their compute, and their staff into the enterprise and coding effort. So consumer, it's already built; it's not that crazy to keep it maintained and just keep rolling stuff out. But it seems like most of their efforts are still going to go toward catching up and closing the gap in enterprise and coding, especially because reputationally they also have a gap to close there. Anthropic, for better or worse, has the reputation of being pretty trustworthy, pretty brand-safe. That's what a lot of startups were telling me when I talked to them about this a few months ago. They were all afraid of the risk associated with especially xAI, and somewhat OpenAI as well. Anthropic, they felt pretty safe using. They didn't feel like they'd be on the hook for reputational risk. So yeah, OpenAI has to close the gap in terms of actual usage, anecdotally what people prefer, the hype that's involved with Claude Code and all that it brings, and then also the reputational stuff Anthropic has going for it just because of its steady, slow growth. That steady state for Anthropic is kind of reflected across the company, not just in product development or strategy, but in terms of employee retention and the ability to attract people from across the AI industry. A lot of people just head toward Anthropic. I would contrast that to OpenAI, which, as you're pointing out, has a different reputation. And then just in the last week or two, it feels like it's turned over its entire executive team. Fiji Simo, who you mentioned is CEO of AGI deployment: her title, like a minute ago, was CEO of applications, and they switched it to AGI deployment, which I don't understand at all. And now she's out on medical leave. I wish her well. They have other executives who are out on leave.
Their head of marketing just left. Right before all those people left, they bought a podcast called TBPN. What's going on over there? Is this a stable company right now? It is pretty crazy right now. I think they're going through a huge strategic shift. And it's a question of whether this is going to be like every other shift we've seen in the past with them, kind of going all in on one thing, then all in on another thing, then all in on five things in parallel. Is this really going to last? That's what people are asking, and we don't know. I mean, I think if they are entirely focused on coding and enterprise, yeah, it makes sense that there'd be a lot of upheaval. But they're also really into building this super app, and Greg Brockman just took charge of that while Simo is out. And then on the business side, their CSO, their CFO, and their CRO are going to take charge. And their CMO just stepped down due to health reasons; their head of communications stepped down in January, and there's still no replacement there. And that, I think, is part of why they bought TBPN. There's been a lot of bad press about OpenAI in the last few months. They've had a lot of public controversy and a lot of drama playing out, and that is not good for their quest to have a reputation as a company that enterprises and the government can trust. So I think that's part of why they bought TBPN. They said, literally, that they wanted to help shape the narrative, a.k.a. help control the public narrative playing out about AI. And what better way than to hire the people being watched for three hours every weekday who are talking about it? Plus, TBPN is now going to help with OpenAI's comms and marketing in their free time. So yeah, it's really an acqui-hire situation. Yeah. I have a lot of thoughts on that.
I think a lot of the other AI companies who put their executives on TBPN certainly have a lot of thoughts about that, which I've heard. We'll set that aside. The idea that it's just a marketing problem or a narrative problem: the last time I heard that was Uber executives complaining that Uber only got bad press. And most of the media I knew at the time was like, no, you just keep doing stuff. Yep. It's not the narrative; it's literally what you are doing all the time that is getting you the bad press. You can't just yell us into liking you. It's not quite the same with OpenAI. They are darlings in a very specific way. But the bad press, or the perceived bad narrative, in my opinion, has everything to do with their strategic confusion. Right? The media is not out to get AI. When you do the polls, the broad public in the United States is like, AI is less popular than ICE. There's a gap in how people feel about these tools, even though they're exposed to them all the time. Is that all on OpenAI? Is that across AI in general? To me, it feels like it's very much OpenAI, in particular Sam Altman running around all the time saying, I might destroy the world by accident if you don't give me all this power. Yeah, I think it's really interesting to look at how the general public feels versus how people in tech feel, because they feel different ways; there is some overlap, but for different reasons. I remember when I was chatting with a firm that charts public perception of things in the last couple of months, they said that, yeah, the general public really didn't like AI for the most part. They broke it out by generation, by gender, all sorts of different things. But for the most part, the general public was not a fan.
And they noticed that the more well-known an AI company was, the worse the public perception of it, just because people were more aware it was an AI company. So OpenAI had a worse reputation by this firm's standards than Anthropic, but as Anthropic's public profile rose, opinion of it was going down too. So that was just interesting. If you're known as a household name for being an AI company, the general public right now isn't really a fan, for the majority at least. But within tech, people are looking at the business angle and how these companies are conducting themselves, what the CEOs are saying. Maybe Sam Altman isn't a household name for the average person in America. I think a lot of people know who he is, but I'll be talking to people in the wild and say Sam Altman, and they're like, who's that? But when I say CEO of OpenAI, they get it. But yeah, I think it's interesting comparing the tech reactions to the general public's reactions, because in the tech industry, people are really raising an eyebrow at OpenAI's business strategy, and at Sam Altman going around saying things like, oh, I couldn't raise my child without ChatGPT, or, oh, if your job gets replaced by AI, maybe you should think about switching jobs. And Dario Amodei has said kind of sus things as well. So no AI leader is doing comms entirely right. But it is interesting to see the difference between the general public's perception and people in tech. We have to take another quick break. We'll be back in just a minute. Hi, I'm Brené Brown. And I'm Adam Grant. And we're here to invite you to the Curiosity Shop, a podcast that's a place for listening, wondering, thinking, feeling, and questioning. It's going to be fun. We rarely agree. But we almost never disagree, and we're always learning. That's true.
You can subscribe to the Curiosity Shop on YouTube or follow in your favorite podcast app to automatically receive new episodes every Thursday.

Make or break and what might happen next

This kind of brings us to the make-or-break moment, right? These companies are both headed toward IPOs. We can see their revenue projections. And it feels a little bit like the tortoise and the hare, right? You've got Anthropic committing to being an enterprise company, being a solution for software development, which has kind of changed the entire nature of software development. You have OpenAI with Codex, which thinks it can eat a piece of that market as well, shutting down consumer applications that aren't working. Maybe it will figure out ads. Maybe it will actually bite off a piece of Google Search. Who knows? But they're still just hopping all over the place. It's pedal to the metal at OpenAI all the time, and Anthropic just continues moving along its roadmap. How do you think that plays out, not over the course of the IPO timeline, but in the short term? Do you think OpenAI can recapture a sense of focus? I think it's going to try very hard. And I think it will be able to. The question is just: can it hold on to that focus? I've seen them change their strategy just like this in the past, and usually it falls by the wayside a couple of months or a year later. So what I'm wondering is how long they can hold on to this. Anthropic has committed to one thing and stuck with it, for the most part. OpenAI, whenever they commit to something, a year later things shift; teams get disbanded; they'll have a reorg. Anthropic's also seen a ton of change, but it's always had this one goal and kind of stuck with it. And I've been tracking both these companies for so long that you can kind of see the trends there.
So maybe this is a big step change for OpenAI, and they're really going to pivot. And maybe that's why we're seeing all this executive restructuring and the side projects being killed, trying to really force them to commit. But it is interesting, because I don't know the rate at which they'll be able to catch up with Anthropic when it comes to enterprise and coding. I've even seen, anecdotally, a bunch of startup founders switching entirely over to Anthropic when they used to be testing both. We did a piece a couple of months ago about how Claude Code was having a moment, and most of the founders and company leaders I talked with, in a ton of different sectors, preferred Anthropic's products for this stuff. To OpenAI's credit, it has been working really hard on improving its coding models, and we've seen people start to prefer them sometimes. So they've made a huge advancement here. It's just: can they close the gap? That's what we'll have to wait and see. Yeah. It really feels like software development is product-market fit for these tools. And that's a very lucrative market. That is a lot of jobs that might go away or change substantially in some meaningful way. And everything else is wait and see. Anything that kind of looks like software development might get product-market fit, but software development is such a big category that it's in particular what a bunch of these companies are going to focus on. I'm very curious, as the pressure on turning these companies into real businesses goes up as they get closer to an IPO, and Anthropic has to monkey with pricing even more to make sure they're running an actual business and not a token subsidy operation, and OpenAI has to cut down on more projects, whether that pricing pressure, that monetization pressure, changes the companies in any meaningful way.
You've been thinking about this and even talking to people about it. Do you see glimmers that that's about to happen? I think so, just because the pressure is building. When I was talking to economists, there's no way that things can continue the exact way they are now. I was chatting with a couple of execs this morning about how the price has been passed on to enterprise clients and how that's shifted. So a lot of company leaders are thinking about moving to open source instead, or at least moving to open source for a lot of their simpler tasks or queries, and building their own evals to figure out what it makes sense to pay top dollar for, whether that's Anthropic's or OpenAI's more complex, more powerful models, which queries they can route to simpler models, and which they can just go open source for. So it's interesting that, in order to combat these pricing shifts, and just the amount they're paying these two labs every month, they're starting to build their own internal infrastructure and tests and evals, saying, okay, let's really budget here. What do we really have to pay top dollar for? And where can we skimp and still get around the same kind of answer? That's what I think is interesting: this cottage industry of charting this stuff out internally, keeping it close to the vest, and using it as a guide. Yeah, that's kind of why I was asking about inference at the beginning. If the models today are good enough to be this disruptive to software development, there's no reason that a distilled model a few years from now that's much cheaper to run, or that you can run locally, wouldn't be as good. And then the bleeding-edge model is unnecessary because it's so expensive.
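The routing pattern Hayden describes can be sketched in a few lines. Everything here is hypothetical: the model names, the per-token prices, and the toy complexity heuristic stand in for the internal evals these companies are actually building; the point is only the shape of the decision, cheap model by default, frontier model when the query seems to need it.

```python
# A minimal sketch of cost-aware model routing. Model names, prices, and
# the complexity heuristic are hypothetical placeholders, not real products.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_million_tokens: float  # hypothetical $/1M tokens

FRONTIER = Model("frontier-large", 15.00)
OPEN_SOURCE = Model("local-open-model", 0.50)

def looks_complex(query: str) -> bool:
    """Toy complexity heuristic; real systems use internal evals instead."""
    hard_signals = ("refactor", "architecture", "prove", "multi-step")
    return len(query) > 200 or any(s in query.lower() for s in hard_signals)

def route(query: str) -> Model:
    """Send only queries that seem to need frontier capability to the pricey model."""
    return FRONTIER if looks_complex(query) else OPEN_SOURCE

print(route("Summarize this paragraph.").name)           # local-open-model
print(route("Refactor our billing architecture.").name)  # frontier-large
```

In practice the routing decision would be driven by eval results per task category rather than a keyword check, but the budget logic, paying 30x the price only where it measurably matters, is the same.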
We haven't run this industry long enough to know how those pricing dynamics play out, but it feels like the additional capability from the next model, and the next, and the next, is unnecessary if the current models are already so disruptive to at least the software development industry. I'm just very curious to see how that pricing plays out, because the incentives to keep burning money on training go down if the products are good right now. I haven't really seen the labs talk about it, but you can see the bigger companies that have to be much more tightly run, like Google, starting to understand: oh, we can deploy a lot of different models for a lot of different uses and lower costs across the board.

Right. And it's funny, too, that these labs do promise: oh, as the years go on, prices are going to drop. Don't worry, we're going to offer these models and access cheaper and cheaper. But that doesn't really square with what's going to have to happen for them to turn a profit, especially one that investors won't roll their eyes at. So that's what's going to be interesting. There's a lot of tension here that we'll have to track over the next six months or so, because this is going to be the big year for paying the piper.

Yeah. Well, there are only two ways to do it, right? You can increase the number of people paying the amount they're paying now, or you can increase the price. Which one do you think it's going to be?

It's funny, because that's exactly what an economist was telling me this morning. He said, well, you either have to expand to basically the entire general public globally, or you're going to have to raise prices a lot, and even then it may not square with what you actually need. So we'll see. It's going to be a really interesting year. Yeah.
Like I said, it feels like this past week was a real inflection point, as we saw Anthropic starting to play with pricing in a way that shifted user behavior, I think somewhat meaningfully, and then OpenAI realizing it needed to get into the more lucrative part of the market and abandon some things that got it a lot of attention but ultimately had no path toward money. What do you think happens next year?

I think we're going to see OpenAI further consolidate resources into these two focuses and lose a couple of other side projects. We've heard maybe Atlas is going to go by the wayside. Probably some safety research, too, I would guess, even though they haven't said this, or at least doing what they tend to do, which is reassigning people on a certain safety research team into other departments. Then they say they're not actually diminishing the research, but who knows whether they are or not. So I think we'll see some changes to its research org, and maybe to the people studying long-term risks. They've got to keep some of them around because it's good PR, but I could definitely see them, kind of on the DL, reassigning some of those people. And then it's all about devoting more compute to the things that are going to make the most money so that they can keep investors happy, in whatever ways they can, probably along with maybe another funding round before they go public. OpenAI is also reportedly trying to go public before Anthropic. That's something Sam Altman is apparently really serious about, but The Information reported he's been sparring with its CFO, Sarah Friar, about it. She apparently doesn't think the company is ready to go public as quickly. So yeah, we're going to see maybe some more leadership turnover. Who knows?
But I think the next couple of months are going to be very interesting for executive turnover, for which projects get killed off, and also probably for some top engineers going from one lab to another or vice versa. Tracking who's moving, to where, and from what team is going to be really telling as well.

I feel like we could do another full hour of Decoder on just how much the AI industry is driven by Dario and Sam hating each other specifically. I don't know if that's today, but you're going to come back and do that one soon. Hayden, thank you so much for being on Decoder.

Thanks so much.

I'd like to thank Hayden for taking the time to join Decoder, and thank you for listening. I hope you enjoyed it. If you'd like to let us know what you thought about this episode, or really anything else at all, drop us a line. You can email us at decoder@theverge.com. We really do read all the emails. Or you can hit me up directly on Threads or Bluesky. We're also on YouTube; you can watch full episodes at @decoderpod, and we have a TikTok and an Instagram, also at @decoderpod. They're a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts. If you really like the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network. The show is produced by Kate Cox and Nick Statt. This episode was edited by Xander Adams. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. We'll see you next time.

This excerpt was generated by Pod-telligence
