Hard Fork
The New York Times
Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution
About This Episode
Ads are coming to ChatGPT’s free and low-cost subscription tiers. We explain what they’ll look like, why OpenAI is taking this approach and whether the company can court advertising dollars without compromising quality and user trust. Then, Amanda Askell, Anthropic’s in-house philosopher in charge of shaping Claude’s personality, joins us to discuss the company’s newly released “Claude Constitution” and what it takes to teach a chatbot to be good.
As a bonus, if you’re interested in learning how to get started with Claude Code, you can check out our tutorial on YouTube.
Guest:
- Amanda Askell, a member of Anthropic’s technical staff
Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
More Episodes
Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing
In this episode of Hard Fork, hosts Kevin Roose and Casey Newton explore the dual nature of recent developments in the artificial intelligence landscape. The episode opens with a deep dive into Anthropic’s new model, Claude Mythos Preview. Unlike standard AI releases, this model is being withheld from the public due to its potential dangers, specifically its ability to identify critical cybersecurity vulnerabilities across major operating systems and web browsers. Anthropic has instead provided access to a consortium of tech companies for defensive testing. The hosts discuss the massive implications for global digital infrastructure and the potential for a forced industry-wide software reset. Following this, the show features New Yorker writers Ronan Farrow and Andrew Marantz, who join the hosts to discuss their extensive investigative profile of OpenAI CEO Sam Altman. The conversation centers on the central question of the piece: whether Altman can be trusted in his role as a leading figure in AI development. Farrow and Marantz detail their forensic approach to the reporting, highlighting how their investigation moves beyond common Silicon Valley tropes to examine the nuanced patterns of Altman’s professional conduct and public-facing persona.
The Future of Addictive Design + Going Deep at DeepMind + HatGPT
In this episode of Hard Fork, the hosts examine the evolving legal landscape surrounding social media and its impact on the internet. A primary focus is the recent string of court rulings where juries found major platforms negligent regarding their design features—such as infinite scrolling, autoplay, and recommendation algorithms—labeling them as defective products that knowingly harm users, particularly children. The discussion highlights how these cases represent a potential shift in how Section 230 is interpreted, moving beyond content liability toward questioning the "mechanical" design of these applications. The hosts explore the parallels between this litigation and the historical legal battles against big tobacco, debating whether it is possible to separate a platform's design from its content. They also touch upon the broader implications of these verdicts, including whether platforms will be forced to roll back engagement-driving features or adopt stricter age-gating policies. Finally, the episode looks toward the future, considering how the rise of AI chatbots might represent the next frontier of this debate as these technologies become increasingly sophisticated and integrated into the platforms that dominate our daily screen time.
The Ezra Klein Show: How Fast Will A.I. Agents Rip Through the Economy?
In this episode, Ezra Klein interviews Jack Clark, a co-founder and head of policy at Anthropic, to explore the rapid evolution of artificial intelligence from conversational chatbots to autonomous agents. The discussion moves beyond the initial hype, focusing on how these new systems function as digital workers capable of completing complex tasks, such as writing sophisticated software or managing large-scale projects, with minimal human intervention. Clark explains that the transition toward agentic AI is fueled by models that possess a form of "intuition" developed through problem-solving environments. The conversation addresses the shift in human roles: as individuals delegate technical tasks like coding or research to AI agents, they are increasingly moving into roles that resemble product managers or editors. While this evolution offers significant productivity gains by offloading mundane labor, Klein and Clark express shared concerns about the long-term impact on human development. They argue that critical skills—and the "good taste" necessary to direct these AI systems effectively—are often cultivated through the very labor that agents now perform, raising important questions about the future of work and education in an AI-driven economy.
‘A.I.-Washing’ Layoffs? + Why L.L.M.s Can’t Write Well + Tokenmaxxing
In this episode of Hard Fork, hosts Kevin Roose and Casey Newton investigate the recent wave of tech layoffs, exploring whether corporations are genuinely being transformed by AI or simply engaging in "AI-washing"—using the trend as a convenient narrative to justify restructuring. While companies like Atlassian, Block, and Meta have cited AI as a driver for staff reductions, the hosts argue that these moves often reflect broader corporate mismanagement, stock market pressure, and a pivot toward massive infrastructure spending rather than simple automation of existing roles. The conversation then shifts to the limitations of modern large language models, featuring guest Jasmine Sun, who discusses her research into why today’s chatbots often struggle with creative writing. Sun explains that while older, "noisier" models like GPT-2 were surprisingly poetic and weird, modern iterations have been constrained by post-training processes and human feedback rubrics that prioritize corporate "helpfulness" over artistic voice. The episode concludes with a broader critique of how tech giants measure success, questioning whether the push for objective, quantifiable benchmarks is stifling the nuanced, creative potential of AI.
A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity
OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop
At the Pentagon, OpenAI is In and Anthropic Is Out
Is A.I. Eating the Labor Market? + The Latest on the Pentagon, OpenClaw and Alpha School
The Pentagon vs. Anthropic + An A.I. Agent Slandered Me + Hot Mess Express
‘Something Big Is Happening’ + A.I. Rocks the Romance Novel Industry + One Good Thing