Hard Fork

The New York Times

The Future of Addictive Design + Going Deep at DeepMind + HatGPT

Apr 3, 2026 · 1h 9m
Summary

In this episode of Hard Fork, the hosts examine the evolving legal landscape surrounding social media and its impact on the internet. A primary focus is the recent string of court rulings where juries found major platforms negligent regarding their design features—such as infinite scrolling, autoplay, and recommendation algorithms—labeling them as defective products that knowingly harm users, particularly children. The discussion highlights how these cases represent a potential shift in how Section 230 is interpreted, moving beyond content liability toward questioning the "mechanical" design of these applications. The hosts explore the parallels between this litigation and the historical legal battles against big tobacco, debating whether it is possible to separate a platform's design from its content. They also touch upon the broader implications of these verdicts, including whether platforms will be forced to roll back engagement-driving features or adopt stricter age-gating policies. Finally, the episode looks toward the future, considering how the rise of AI chatbots might represent the next frontier of this debate as these technologies become increasingly sophisticated and integrated into the platforms that dominate our daily screen time.

Updated Apr 3, 2026

About This Episode

Last week, two separate juries held social media companies liable for harming young users. We unpack what these landmark decisions mean — not only for the future of social platforms like Meta and YouTube, but also for A.I. chatbots. Then, Sebastian Mallaby, the author of “The Infinity Machine,” joins us to talk about the three years he spent with Demis Hassabis and those closest to Google DeepMind. And finally, we catch up on some of our favorite tech headlines from the week with a round of HatGPT.


Guest:

  • Sebastian Mallaby, author of “The Infinity Machine: Demis Hassabis, DeepMind and the Quest for Superintelligence.”



We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.


More Episodes

Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing

Apr 10, 2026 · 1h 4m

Summary

In this episode of Hard Fork, hosts Kevin Roose and Casey Newton explore the dual nature of recent developments in the artificial intelligence landscape. The episode opens with a deep dive into Anthropic’s new model, Claude Mythos Preview. Unlike standard AI releases, this model is being withheld from the public due to its potential dangers, specifically its ability to identify critical cybersecurity vulnerabilities across major operating systems and web browsers. Anthropic has instead provided access to a consortium of tech companies for defensive testing. The hosts discuss the massive implications for global digital infrastructure and the potential for a forced industry-wide software reset. Following this, the show features New Yorker writers Ronan Farrow and Andrew Marantz, who join the hosts to discuss their extensive investigative profile of OpenAI CEO Sam Altman. The conversation centers on the central question of the piece: whether Altman can be trusted in his role as a leading figure in AI development. Farrow and Marantz detail their forensic approach to the reporting, highlighting how their investigation moves beyond common Silicon Valley tropes to examine the nuanced patterns of Altman’s professional conduct and public-facing persona.

The Ezra Klein Show: How Fast Will A.I. Agents Rip Through the Economy?

Mar 27, 2026 · 1h 40m

Summary

In this episode, Ezra Klein interviews Jack Clark, a co-founder and head of policy at Anthropic, to explore the rapid evolution of artificial intelligence from conversational chatbots to autonomous agents. The discussion moves beyond the initial hype, focusing on how these new systems function as digital workers capable of completing complex tasks, such as writing sophisticated software or managing large-scale projects, with minimal human intervention. Clark explains that the transition toward agentic AI is fueled by models that possess a form of "intuition" developed through problem-solving environments. The conversation addresses the shift in human roles: as individuals delegate technical tasks like coding or research to AI agents, they are increasingly moving into roles that resemble product managers or editors. While this evolution offers significant productivity gains by offloading mundane labor, Klein and Clark express shared concerns about the long-term impact on human development. They argue that critical skills—and the "good taste" necessary to direct these AI systems effectively—are often cultivated through the very labor that agents now perform, raising important questions about the future of work and education in an AI-driven economy.

‘A.I.-Washing’ Layoffs? + Why L.L.M.s Can’t Write Well + Tokenmaxxing

Mar 20, 2026 · 1h 0m

Summary

In this episode of Hard Fork, hosts Kevin Roose and Casey Newton investigate the recent wave of tech layoffs, exploring whether corporations are genuinely being transformed by AI or simply engaging in "AI-washing"—using the trend as a convenient narrative to justify restructuring. While companies like Atlassian, Block, and Meta have cited AI as a driver for staff reductions, the hosts argue that these moves often reflect broader corporate mismanagement, stock market pressure, and a pivot toward massive infrastructure spending rather than simple automation of existing roles. The conversation then shifts to the limitations of modern large language models, featuring guest Jasmine Sun, who discusses her research into why today’s chatbots often struggle with creative writing. Sun explains that while older, "noisier" models like GPT-2 were surprisingly poetic and weird, modern iterations have been constrained by post-training processes and human feedback rubrics that prioritize corporate "helpfulness" over artistic voice. The episode concludes with a broader critique of how tech giants measure success, questioning whether the push for objective, quantifiable benchmarks is stifling the nuanced, creative potential of AI.

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity

Mar 13, 2026 · 1h 6m

OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop

Mar 6, 2026 · 1h 5m

At the Pentagon, OpenAI is In and Anthropic Is Out

Mar 1, 2026 · 33 min

Is A.I. Eating the Labor Market? + The Latest on the Pentagon, OpenClaw and Alpha School

Feb 27, 2026 · 1h 0m

The Pentagon vs. Anthropic + An A.I. Agent Slandered Me + Hot Mess Express

Feb 20, 2026 · 1h 4m

‘Something Big Is Happening’ + A.I. Rocks the Romance Novel Industry + One Good Thing

Feb 13, 2026 · 1h 0m

Elon Musk’s Mega-Merger + We Test Google’s Project Genie + What’s Next for Moltbook Creator

Feb 6, 2026 · 1h 4m

All podcast names and trademarks are the property of their respective owners. Podcasts listed on Podtastic are publicly available shows distributed via RSS. Podtastic does not endorse nor is endorsed by any podcast or podcast creator listed in this directory.