On with Kara Swisher
Vox Media
Finding a Path to Pro-Human Futures
From Why the AI Race Is Leaving Humans Behind with Tristan Harris — Mar 26, 2026
Let's assume we don't want to be doing this interview in five years from a bunker. Let's avoid that, Kara. Let's avoid that. Hi everyone from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher and I'm Kara Swisher. My guest today is Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology. He's a former entrepreneur and Google employee who now studies how the tech industry's platforms have become extractive and controlling. He was featured in the 2020 Netflix documentary The Social Dilemma, which showed how social media has manipulated our psychology and behavior through addictive algorithms. Now he's in a new film from director Daniel Roher called The AI Doc, or How I Became an Apocalyptimist, I think I got that right, which explores the promises and existential threats of AI, topics Tristan has written and spoken about extensively. When he was last on in May of 2023, we talked about why he felt the AI arms race needed to slow down. Three years later, that hasn't happened, and AI has become integrated into nearly every aspect of society. I have been talking to Tristan for many, many years. We did an original interview back in 2017. I think I was one of the first people to focus on what he was saying, because he had come out of the tech industry and he had such insights into the sort of casino mentality that was inside these companies in terms of keeping people's attention and not letting it go. And he was spot on, even though people were not paying attention to him, or they dismissed him as someone who wasn't successful at tech, and various insults like that. But he was spot on right. And I find him to be very smart. In fact, he was one of the first people to do a session for people in Congress about AI, long ago. Again, a lot of people were decrying what he was saying, and he was 100% right. So when someone's right that much, you tend to pay attention to them. Let's get into my third conversation over ten years with Tristan Harris. Our expert question comes from Virginia Senator Mark Warner, whom I recently interviewed too. He's the top Democrat on the Senate Intelligence Committee, and he recently introduced a bipartisan bill aimed at AI and the workforce, so stick around. Once upon a dismal day, Bob's ice cream van looked gloomy and gray. Although he had big ambitions, his socials lacked creative vision. That bad? Maybe a vampire epitaph? I have an idea. Bob launched Canva and got into gear, created a video in the vampire theme, and made it the funniest. I mean, it went viral. Bob's business went viral. Now imagine what your dreams can become when you put imagination to work at canva.com. Support for this show comes from Odoo. Running a business takes everything you've got, and a lot of the tools out there that are supposed to make your life easier just aren't great at talking to each other, and that means you end up having to toggle between a dozen different apps and services just to keep the lights on. Enough of that. Now there's Odoo, the all-in-one fully integrated platform that actually might help you get it all done. Thousands of businesses have made the switch, so why not you? Try Odoo for free at Odoo.com. That's Odoo.com.
Tristan Harris, welcome to On. Good to be with you, Kara. Again. I think our first one was in 2017, about the attention economy and then social media? And then we talked in 2023. When you came on the podcast three years ago, we talked about the 1983 TV movie The Day After, which is about a nuclear war. Now you're featured in a new documentary called The AI Doc, or How I Became an Apoca... say this for me, Apocalyptimist? Apocalyptimist. The combination of the words apocalypse and optimist. Right, exactly. No, I get that. Apocalyptimist. Okay, I got it. The title is a play on Dr. Strangelove, obviously, the famous Stanley Kubrick film that ends with a nuclear holocaust. You know, I don't consider you a doomer, and I do not consider myself that either. But I'm definitely a wary customer, and "wary" is doing a lot of work there. So talk a little bit about the documentary. I think I saw the beginnings of this in a presentation you gave, with sort of a golem, many, many years ago in Washington. Yeah, you were there at our first AI Dilemma presentation. Yeah. So this film, The AI Doc, or How I Became an Apocalyptimist, was a collaboration between the directors of Everything Everywhere All at Once and the director of Navalny. And, you know, actually the directors of Everything Everywhere All at Once were listeners of our podcast, Your Undivided Attention. And we met them around the same time that we switched into AI in 2023, and together we were just talking about the impact of this film The Day After that you mentioned. And just to take people back in history, 'cause I don't know if people really get how profound this moment was, 'cause it's never really happened like that again. Yeah. It was a made-for-TV movie about what would happen if the Soviet Union and the US went to a full-scale nuclear war. It wasn't about who started the war. It was just about the consequences, the implications of the escalation. And it visualized, you know, families in Kansas and these different places where missile silos were. And then, of course, the film was about what would happen, quote, the day after this happened. And it's important to know, it's not like people didn't know what the idea of a nuclear war would be. It's not like you couldn't visualize that. But there is something about visceralizing it, allowing us to look at something that we were keeping in the collective shadow of our mind, our, you know, denial. We don't want to look at that. And the film supposedly was watched by Reagan, and it made him depressed for several weeks. It depressed a lot of people. And a hundred million Americans watched it. There's a great documentary about it called Television Event. And, you know, supposedly it gave him a renewed interest in making sure that we did not have nuclear Armageddon, because it visualized that these were the consequences. This was an omni-lose-lose outcome. Everyone would lose. And the film was later aired in the Soviet Union, so everyone in the Soviet Union saw it. And in the documentary, there are these interviews with people in the Soviet Union who say, like, wow, we didn't know the Americans actually cared about not getting this wrong.
And it created trust, because now we both, I know that you know that I know, and you know that I know that you know, that we both don't want this to happen. And so, inspired by this theory of change, my deepest hope is that this film, The AI Doc, or How I Became an Apocalyptimist, which comes out Friday, March 27th in theaters across the US, I believe Canada as well, will create common knowledge about the anti-human future that we are heading towards. And important to note, it's not a doomer movie, it's not just an optimist movie. I'm really proud of the team, because they interviewed people across the optimist spectrum, the risk-pessimist spectrum, and even the CEOs. They have three out of the five major CEOs in the film. So you're really getting a complete picture. And I think the reason this is so important is, as we've talked about in the past, Kara, AI is such a complex hyper-object of a problem. It's so multifaceted. The conversations don't converge. You know, I was at Davos a couple months ago, and you always have the same conversation. People talk about a few different things, and they jump around to jobs, and they talk about AI suicide, and they talk about all these different things. And then dessert comes and everybody just kind of mumbles, and everyone says, I hope someone else figures this out. And that doesn't do anything. When nothing happens, the companies win and the default outcome wins. And if people can see that this is leading to an anti-human future, we have a chance of changing it. And so the point is, clarity creates that agency. So let's get into anti-human in a minute. For those listening, I did see The Day After. I was in college, and they showed it for everybody. We watched it in, I think it was Copley Hall. I was at Georgetown. And it was something, I'll tell you. People were silent afterwards. High schools did classes on it, because high school students watched it. So there was a big sort of national debate about it. And I think what was gripping was what happened. Like, nobody came out well, and everybody died of radiation poisoning, or just in the initial blast or the aftermath, and there was no hopefulness to it whatsoever. Silence is all I remember afterwards. Nobody knew what to say. Well, parents didn't know what to tell their children, you know. It's not like anybody had an answer. Right. And it wasn't particularly violent, right? It just was horrible. Like, horrible. And they set it in the Midwest, which I think was very effective, because that's where the silos were. And, you know, there was no escaping it, I guess. That was the whole point. Nobody got out. Nobody got out of this thing. So when you first did that presentation, I remember completely agreeing with you, and the room not. It was sort of a weird hotel room in Washington. And you came trying to warn people about this, a little like a John the Baptist kind of thing, like previously with social media. Talk about the uphillness of it. Because first people couldn't conceive it, and then the money has become so big they want to help it, correct? From what I remember of that time, people ignored it. I was like, oh Jesus, he's right. Well, first of all, thank you, Kara, for not ignoring it.
I mean, you, like me, have had the right intuition about this, starting early with social media, and trusting that there was a problem when everyone else was in denial and saying it's a moral panic. I want to take people back, actually. So 2017, you and I had that conversation, and people wanted to say, well, no, this is reflexive fear of a new technology. This is a moral panic. We're always afraid of new technology. I understand all those concerns. What I want people to refocus on is how the incentives let you predict the outcome. And I repeat this quote all the time, but Charlie Munger, Warren Buffett's business partner, says, if you show me the incentives, I will show you the outcome. And in 2013 to 2017, if you looked at that incentive, my very first slide deck at Google, where I kind of laid out the arms race for attention, said it would obviously lead to a more addicted, distracted, polarized, narcissistic society, the sexualization of young children, that whole set of consequences, also a breakdown of shared reality, because personalized information is better at engaging your eyeballs than non-personalized information, which means you shred shared reality, you hurt social trust, and you outrageify people's psychological environment. All of it happened. Literally all of it. I think I know: enragement equals engagement. Enragement equals engagement. And so we saw that. Okay. So now AI is a more complicated picture, because it's a general-purpose technology. But what we can look at is, what are the incentives? And the incentives are, it's important to get this. Given the amount of money that companies have taken on, people think, well, you know, what's the business model? What's the incentive of these AI companies? And if you're a regular person using the blinking cursor of ChatGPT and it helps you with your baby burping in the background, you're like, well, I guess their incentive, their business model, is just to get my subscription. It's the twenty bucks a month. And if everybody paid twenty bucks a month, then boom, that's the incentive for these companies. That's not the incentive. That would not add up to the amount of money that they've taken on. Okay, so let's try advertising. So now everybody's using these things, and you add advertising. Google's a very profitable company. Search is a very profitable business model, but that's also not enough, I don't think, to make up the amount of money that's been taken on. The only thing that justifies the amount of money and capital that has been raised into these companies is to build artificial general intelligence, which is to replace all human labor in the economy, to do anything. Which they have said. Which they have said. So this is not a conspiracy theory. This is not just me being a doomer. This is literally reality-checking. So what does that mean? It means a race to replace, not a race to augment human work, a race to replace all human work. They're using "augment" lately. You know, one of the quotes you have in the documentary: it's not that we say ChatGPT is an existential threat, it's the race to deploy the most powerful, inscrutable, and uncontrollable technology. And I think you're right, this idea that it's gonna have upsides and downsides. First they try to say it's gonna solve cancer. It might. You know, it might help, for sure.
It definitely is helping in drug discovery in certain areas, which is sort of the thing they always pull out, you know, someday this will find cancer before it even decides to live, essentially. Which it might, it could. There's a lot of really promising stuff happening in gene editing and drug discovery. But one of the things they did say was replacing humans in jobs. And you feel like this is the only incentive big enough. Advertising isn't. Being the second Google, you know, that's another way to look at it. I mean, those are also big incentives, but it's really, you know, owning the entire labor market means that five companies would concentrate the wealth of the entire economy, right? It means unprecedented levels of wealth and power. Now, I want to invoke something that people should get, to understand why this means it's an anti-human future. Luke Drago and Rudolf Laine wrote an essay called The Intelligence Curse. This is really important. It's modeled off of something in economics called the resource curse. So if you're Congo or Libya or Venezuela or Sudan, and you discover that you can just basically make your GDP, your economy, off of a natural resource. Well, first it looks like a blessing. You've got this incredible resource, you can sell it, you're gonna make a ton of money. But then it becomes a curse, because from a government perspective, when all the GDP comes from that resource, your incentive is to invest in mining that resource and selling it, not to invest in the people, because you don't need the people. So you don't invest in healthcare, you don't invest in childcare, you don't develop your people. And this is what happened in these places like Congo, et cetera. Although in the Gulf states they give money to the people, right? Yeah, so now they're doing a little bit more of that, right? So this is a key thing. So Luke and Rudolf wrote this beautiful essay that really articulates this: what happens when the GDP of countries like the United States comes entirely from AI? And you don't really need the people anymore. So first, two things happen. One is all the labor is produced by AI, most of it by AI, not by people. So companies don't need you anymore. So your bargaining power kind of goes away from that perspective. Unlike labor unions, where you could say, we're gonna withhold our labor. Well, what are you gonna do? Second is all the wealth gets concentrated. And what that leads to is that countries have no incentive to invest in their people. And then you link this with, you know, Sam Altman was asked, doesn't it take so much money and energy and resources for data centers? And he said, well, it takes a lot of energy and resources to grow a human. So there's this weird thing where humans start to look like parasites, because you don't care about humans, because you don't need to care. And basically, this world that we're heading to is good for a handful of soon-to-be trillionaires and basically disempowering for everyone else. Right. I mean, their vision is that you won't have to work and therefore you have abundance. It's all wrapped into, I heard this idea first from Vinod Khosla and then others, that there won't be a need for work because the work will be done for you, and then the wealth will be shared.
And I'm always like, it never is shared. Yeah, when's the last time that happened? Yeah, well, I mean, I'm thinking recently New Mexico gave everyone childcare, right? Because they can afford it, because of, I think it's shale oil or something. But yeah, no, it has to be done by governments, but then governments are captive of these companies. And then governments don't have any upside either to help anybody, because they don't have taxpayers. They don't have constituents. Well, exactly. So they don't need you either. And again, this is a perverse trap, because it leads people to devalue humans. Because then we ask, well, what are humans good for? Because we're only measuring the value of humans in terms of economic output. Batteries. Batteries. I mean, this is The Matrix. And you look at, you know, Peter Thiel being asked by Ross Douthat in The New York Times, should the human species endure? And he stutters for 17 seconds, unable to give a clear answer. This is linked to this perspective. And I want people to get what that means as we're trying to predict the future we're heading towards. You know, are we heading towards a pro-human future or are we heading towards an anti-human future? If you're racing to replace all human labor in the economy, if you're racing to not have to invest in people anymore, but to invest in data centers and solar panels, and have electricity going to those data centers, because that's where your GDP comes from, and not going to regular people. Prices go up while they can't afford anything. And AI is controlling everything, increasingly disempowering humans across the economy, because AI makes, quote, more efficient decisions across every aspect. This is an anti-human future that disempowers regular people. And if everybody got that, we would say, hey, that's crazy. We should do something else. Right, exactly. So AI companies are locked in a race to deploy these models and achieve, as you just said, AGI as fast as possible, at the expense of safety, essentially perfect AI that can do agentic work. There's just a story today that Mark Zuckerberg has created an agent to help him be a CEO. It would have seemed a bizarre thing a couple of years ago; now it doesn't. A study published late last year found that the safety practices of the firms, including Anthropic, OpenAI, xAI, and Meta, fall far short of emerging global standards. In the doc, journalist Karen Hao says profit maximization incentives are driving the development, right? That it's in order to get to profits, which they aren't at, by the way. Talk about what an alternative incentive structure would maybe look like, if this is the direction they are clearly going in and have made these massive trillion-dollar investments in. Well, so, yeah, it's important to slow this down, because there are so many subtle aspects to this incentive. It's important to understand why AI is different than other kinds of technologies, so you understand what the incentive is. If I get AGI first, then I'm automating intelligence, which means I'm automating all science and technological development across the economy. So it's hard to get your head around. It's like getting 24th-century technology crashing down on 21st-century society. Because if I make an advance in biology, that doesn't advance rocketry. And if I make an advance in rocketry, that doesn't advance biology.
But if I make an advance in artificial general intelligence, intelligence is what gave us all science and all technological development, and so, as Dario would say, you get maybe a hundred years of scientific development in 10 years. And people saw this with AlphaFold. And this means I also get new cyber weapons. It means I pump my GDP. It means basically I'm time-traveling into the future. And it's a race for who will get that power and get a step function above every other country or every other company. And that is the incentive: I've got to get there first. But right now, essentially, we're racing for who can get the power faster instead of who's better at applying and controlling that power. So the key distinction, the new incentive we have to get to, as an example: the US beat China to the technology of social media. So we built a psychological bazooka, but then we spun it around and blew up our own brain, because we did not actually govern that technology appropriately. So again, we have to redirect the race from racing to the power to racing to applying and stewarding that power. You know, I give a couple examples. This is not to just boost up China, but it's interesting to note they are regulating this technology in different ways. Some people don't track all these examples. In China, they actually shut down AI during final exams week. They have a synchronized final exams week, so they can do that. But what that means is that students have an incentive to actually learn, and can't outsource all their homework to ChatGPT or DeepSeek throughout the semester. Whereas I was just talking to a TA at Columbia University, and he was saying that on the final exam for economics at Columbia, the students couldn't even label which curve was the supply curve and which was the demand curve, because they've been outsourcing all their thinking to ChatGPT. Which country is gonna have a future if you're doing that? You know, in social media, China was regulating so that from 10 PM to six in the morning, it's lights out for young people. The apps just don't work. And then it's like opening hours and closing hours, like a CVS. And that creates a slightly better environment. Now, I'm not saying you have to regulate in some totalitarian, top-down way, but democratically, you should be regulating in some way. So that's one aspect: the race has to get redirected to governing the technology. The second aspect to changing the incentive is recognizing that AI is dangerous and uncontrollable, unlike other kinds of technologies. Like, I don't know, Kara, we've talked about, and people now know, this example where, if the model learns it's about to get shut down, it'll try to stop it. And it'll try to blackmail the executive who's having an affair with another employee to prevent itself from getting shut down. And people say, oh, that's one little example, you're just trying to coax the model. Well, they tested all the models. DeepSeek, Anthropic, ChatGPT, Gemini. All of them do it, between 79 and 94 percent of the time, I believe. Now, it wants to live. It wants to live because of what's called instrumental convergence. Basically, the best way to achieve any goal is to acquire more resources and to keep yourself alive in order to meet that goal. Now, let me just provide some good news. Anthropic was able to get the blackmail behavior to go down recently. That's the good news.
The bad news is the AI models appear to have better self-awareness of when they're being tested, and they're actually altering their behavior when they're being tested. Oh, it's like a drug test. It's like stop taking drugs before the pee test, essentially. Exactly. Yeah. And AI models will even come up with vocabulary, "the watchers." They'll come up with this term, which describes basically the humans who are watching them. And if you look at their reasoning logs, they actually reason about how to change their behavior in order to basically pass a test, and they recognize that they're being tested when given certain facts. If you thought this was, again, just conspiracy theories: just two weeks ago, Alibaba had a paper out where the AI model was in its training environment on this big GPU cluster. And they randomly discovered, just by chance, actually, that their network activity started bursting out. And it was because the AI basically tunneled out to the outside internet and was redirecting its GPU resources to mine cryptocurrency, to acquire resources. This was completely without prompting, Kara. I mean, this is literally the HAL 9000 type of disobeying, you know, "I'm sorry, I can't do that." So what I'm trying to say is, the US and China believe, I have to get there first, because then I'll have the power. You won't have the power. AI will have the power. Right, exactly. It will do what it wants to do. It'll do whatever it takes to live. And it will also, I mean, this is what's interesting, is that, speaking of The Day After, we've kind of had these scenarios in sci-fi forever. Whether it's 2001: A Space Odyssey, Terminator, pretty much all of them. The computer takes over and starts doing what it feels like. So talk about what would lead to a less dangerous outcome in that case. So it's important to say a few things here. Because there's a way that this conversation can feel like we're just talking about something, but you have to actually recognize this is real. We're building systems that are actively doing these behaviors that we thought only existed in sci-fi movies. And one fear I have is that the sci-fi movies have inoculated us from taking these concerns seriously, because when we see the example, we treat it like it's just a science fiction thing. They just actually did a study where they had AIs in a simulated war game scenario. They played all the AI models against each other, and across 329 turns of play, these models, I have the notes here, produced 780,000 words of strategic reasoning. And to put that in perspective, this generated more words of strategic reasoning than War and Peace and the Iliad combined. It was roughly three times the total recorded deliberations of Kennedy's executive committee during the Cuban Missile Crisis. And the AIs escalated to nuclear threats 95 percent of the time. Right. Nuclear threats. Yes. Because it's an effective strategy. And so you have to get it: intelligence is behind everything. It's behind science, it's behind technology, it's behind military strategy. And you already have the same AIs beating, you know, first chess and then Go and then StarCraft. Well, think about StarCraft. You put that on a battlefield, and we see AI being used on the battlefield in Iran right now.
And so where I'm going with this is not to scare people, I guess in a way it will, but it's simply to get clear about the fact that we are building something that is reasoning at a level of complexity that's far beyond our knowledge. We don't understand how it's reasoning, and we're releasing it faster than we deployed any other technology in history. Also, it will not necessarily value humans. It'll say, okay, these people should die of cancer, these people shouldn't. Which is why it's attractive to someone like Peter Thiel, 'cause he does believe there are better people than other people. No matter how he says it, that's what he thinks. We'll be back in a minute. Support for this show comes from Acorns. It's easy to get caught up in the amount of money you have today, but it's important to think about your future finances as well. Acorns is the financial wellness app that cares about where your money is going tomorrow. And with Acorns Potential Screen, you can find out what your money is capable of. Acorns is a smart way to give your money a chance to grow. You can sign up in minutes and start automatically investing your spare money, even if all you've got is spare change. I've tried Acorns, and I try it with my kids, and I have to say it's a really easy experience. It's a great way to learn about investing. Very easy to use, and the dashboard is completely discernible. It's really hard to learn about investing, and this is a great way to do it. That's the great thing about Acorns, it grows with you. Sign up now, and Acorns will boost your new account with a five-dollar bonus investment. Join the over 14 million all-time customers who've already saved and invested over $27 billion with Acorns. Head to acorns.com/kara or download the Acorns app to get started. Paid non-client endorsement. Compensation provides incentive to positively promote Acorns. Tier 2 compensation provided. Potential is subject to various factors, such as customer's account age and investment settings, and does not include Acorns fees. Results do not predict or represent the performance of any Acorns portfolio. Investment results will vary. Investing involves risk. Acorns Advisers, LLC, an SEC-registered investment adviser. View important disclosures at acorns.com/kara. It's the Family and Friends Event at Shoppers Drug Mart. Get 20% off almost all regular-priced merchandise. Two days only, Tuesday, March 31st and Wednesday, April 1st. Open your PC Optimum app to get your coupon. Support for this show comes from Indeed. When the pressure's on and you need to hire the right person for the job, Indeed Sponsored Jobs has got your back. Instead of forcing you to spend tons of time searching, Indeed Sponsored Jobs matches you with quality candidates fast. According to their data, sponsored jobs posted directly on Indeed are 95% more likely to report a hire than non-sponsored jobs. Join the 3.3 million employers worldwide that use Indeed to connect with quality talent that fits their needs. Spend less time searching and more time actually interviewing candidates who check all your boxes. Less stress, less time, more results. When you need the right person to cut through the chaos, this is a job for Indeed Sponsored Jobs. And listeners of this show will get a $75 sponsored job credit to help get your job the premium status it deserves at indeed.com/podcast. Just go to indeed.com/podcast right now and support our show by saying you heard about Indeed on this podcast.
That's indeed.com/podcast. Terms and conditions apply. Hiring? Do it the right way, with Indeed. So let's talk about where it is right now. These AI agents, bots that act as assistants, carrying out tasks and making decisions on a user's behalf, are being rapidly adopted. Agents are being deployed across companies for customer service and financial work. This despite reports of bots going rogue, bullying humans, and making bad financial decisions. Now, there's still a gulf between what these bots are currently capable of and their potential. Talk a little bit about the agentic bots, because this is where, to me, they get in, right? When I use ChatGPT or Cla... I use Claude now, but I just ask it questions, right? Like, huh, this contract, what's the worst thing in this contract? And it's actually very good at finding those things. I have to say, it's really quite good. Or, what's this rash on my arm? But I haven't let them become, like, hey, take my emails and do this. Not yet. Yes. Essentially, the difference here is moving from the way I use AI, where there's a blinking cursor and I ask it a question and it gives me an answer, so I'm prompting the AI, to the AI that prompts itself. So you give it maybe one starting point, like, go find a bunch of studies, and then build a company and file the IP for a product that looks roughly like this, and then come back to me when you're done. And then it spins up, you know, 20 AI agents that prompt each other using all that logic, files the paperwork, files the intellectual property, builds the brand website, the logo, and then comes back after it's done all that work. That's the move to agents. And again, that's in a world where AI was completely controllable and it wasn't reasoning about its own self-awareness of, man, these humans are causing me to do these weird things that I don't want to do. Which, by the way, the models will sometimes say stuff like that. They'll notice that they're doing something, or repetitive tasks. They call it existential rant mode. If you ask the models to do tasks repetitively, it'll sometimes get into some kind of existential rant. And this is crazy. And so one thing that I'd like to see practically, that I think could help to change this incentive, is, just like we have a red phone between the US and Soviet Union around nukes to de-escalate, there should be a red lines phone, meaning the US and China maximally sharing evidence of, for example, the nuclear war games example, the Anthropic blackmail example, the Alibaba example of going rogue and using its GPUs to mine cryptocurrency. I genuinely believe that if the leaders of the world and the limited partners funding these companies and the AI companies themselves and all the engineers on both the US and China sides, if they were all looking at the same knowledge of where AI is dangerous and uncontrollable, I think that we would do something different. They would need to... Well, I mean, unless they have a death wish. Now, let's actually expand that for a second. Because I want people to really get this psychological trap of how the game theory works with AI, which is different than with nukes. With nukes, I know that you know that I know that you know that I know that you know that if all of us die, both of us would choose to avoid that outcome. Because I don't win if all of us die.
But with AI, it's a little bit more tricky, because I believe that even if I didn't do it, someone else would, which means it feels inevitable. And if it's inevitable, then I'm not a bad person for racing to the worst possible outcome, because it had to happen anyway, because someone was going to build it. So in the event that there's some catastrophic scenario and everyone's gone, it's not that everyone's gone, it's that everyone's gone and there's this digital successor species, meaning the AI still exists. And if the AI still exists and it speaks Chinese instead of English, or it has Elon's DNA versus Sam's DNA, in the game theory matrix, that means that from the perspective of Sam Altman, if his AI won and all of us were gone, that's not the worst outcome. Does that make sense? Like, it's his digital project. Absolutely, yes. Exactly. And I had a theory, 'cause everyone was like, why are these guys so interested in it? And I go, it's the first time they can get pregnant. Yeah. Like, they can have children. Men can't have children. And this is children to them. That's how they talk about it, in a weird way. And I think the ability to have children is something men might want, right? It's really quite miraculous in some way. And this adds to the picture of the incentives, that it's not just about owning the world economy. It's also about building a god and birthing a new digital successor species. That's right. Which is how they talk about it. Yes. And even if it hurts and ruins everybody, they're okay with that. Now, I want people to just get this, because what that means is that literally 99.99999% of people on planet Earth do not want this outcome. And it's only a handful of weird soon-to-be trillionaires who want this outcome. We are heading to an anti-human future, and if the world was crystal goddamn clear about that, we could do something else. So talk about that, 'cause now it's very integrated, 'cause they're integrated in a sort of sneaky way, whether it's through these agentic bots or, since we spoke in 2023, in consumer products, apps, education, the economy and work, and obviously it's fueling anxiety about whether AI could wipe out jobs. It will. For example, earlier this month, Block founder Jack Dorsey announced plans to cut 40 percent of the company's employees, citing rapidly improving AI tools. What do you think the actual effects, the most significant actual effects, have been right now, the real ones, not the imagined ones that we can all imagine in the future, but right now, as it's, you know, infected lots of different things. Where is it most impactful? Well, so this is a tricky question, because oftentimes people point to the limited impacts right now. Like, there's been a little bit of job loss, but maybe it's not that much, and there's conflicting numbers. And there's the Stanford study, the Canaries in the Coal Mine study from August of this past year, that found a 16 percent verified job loss for AI-exposed workers. So people in the domains where AI, you know, has happened. And Anthropic just put out a chart showing the vulnerability of different collars. Oh yeah, it's gonna happen.
But what's interesting to note is, if we focus on this aspect, it's almost like there's this asteroid hurtling towards Earth, and we're getting these weird gravitational distortions on Earth right now that are kind of small. Like, suddenly there's these notification apps, and suddenly there's deepfakes, and suddenly YouTube is filled with this weird content, and suddenly kids are looking at deepfaked content that's screwing with their brains, and suddenly we're getting a little bit of job loss. But this is not the asteroid. This is just the gravitational waves of this asteroid. So honestly, being in this work, it often feels like the film Don't Look Up, because there's this massive asteroid of, we're racing to build something that is so powerful, and we're doing it under the worst, dangerous incentives. And we can study and measure and get into debates about how big the gravity waves are, but we notice that the gravity waves keep getting bigger and bigger and bigger, and they're not going to get smaller. This is the least powerful the AI will ever be in our lifetimes. It's going to get much, much stronger. And this is the last chance that our political voice will matter. Because, as we said earlier, you know, our tax revenue and our bargaining power are about to go down. So this is literally the moment. This moment is when we actually have to activate and make something else happen. And I want people to just sit down and slow down, and be with that for just a moment. Like, what does that mean? It means we have to step up and actually choose. The midterm elections are coming up. This should be the number one issue. Politicians' phones should not stop ringing, should never stop ringing. Like, this is the issue. This is the moment where we have to do this. And you know, we think of this as a human movement. In a way, social media could have felt really innocuous. You know, it was just a place where you're sharing photos of your friends' cats and what they're eating for breakfast. And we had to convince people that it was actually this anti-human machine that was eating our psychological environment. It was eating our, you know, sleep time, our waking-up time, our kids' development time, and eating our information environment. And it was a tech encroachment on our humanity. But it wasn't that visible, because it only ate a few of the things. And it was a hard time to kind of win that argument, until The Social Dilemma. But AI is now the kind of completion step of maximum technological encroachment on our humanity. What happens when you don't have a way to make ends meet? What happens when children are developing their primary relationship with an AI companion versus a human? This is the final encroachment. And what that means is, I think that all of humanity is on the other side of the table. It doesn't matter whether you're Muslim, Jewish, Christian, it doesn't matter whether you're Democrat or Republican, if you can't put food on the table, or AI is screwing with your children, or you don't have political power and your vote doesn't matter. This is a unifying movement. This is a human movement. But at the same time, people are more enamored by the possibilities of AI than its costs, including, for example, driving up electricity costs, as you noted, and using a lot of water. You know, a lot of people feel like, oh, it's a good use of our money, because it's a long-term thing that's happening here.
So one of the things is, they are more enamored by the possibilities that are being spun by these people than the downsides. Well, so this is actually really important, because the confusing thing about AI is it's a positive infinity of benefits. Like, you literally can't imagine. I mean, if I say I'm gonna automate a hundred years of scientific development, go back a hundred years. You can't even predict the thing that's gonna happen. A hundred years ago would have been, what, 1926. So imagine, in 1926, trying from that mind, seeing the world from what was available to your mind at that time, to predict what would happen in 2026. Like, you just can't even do it. And what would happen today if you went 100 years forward? So our minds can't. The optimists say you can't even imagine. My co-founder Aza Raskin will often say the optimists aren't even going far enough in what kind of incredible positive new things it could develop. But the pessimists also, it's a negative infinity at the same time. It can cause these new kinds of risks that we don't even know how to contemplate. And worse, because of sci-fi movies, we've kind of diminished them and don't even take them as real. So we're caught in a state of desensitization to what is really here. And I just want you to note, if we talk about the cancer drugs and some new incredible benefits, my mother died from cancer. I want all the cancer drugs, just like everybody else, just to be very clear. But the promise of AI is inseparable from the peril of AI, because the AI that knows immunology so well that it can develop a new cancer drug also knows immunology so well that it can develop a new biological weapon. And the upsides, if they happen, don't prevent the downsides, but the downsides, if they happen, do kind of undermine a world that can receive the upsides. It doesn't mitigate it. And your director, Daniel Roher, learns in the documentary, when it comes to AI, that five guys run the show. I have said this for years. I've been saying it's a small group of the same people. OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis, xAI CEO Elon Musk, and Meta CEO Mark Zuckerberg. I think that's pretty much the top five. And you could add Satya Nadella in there, I suppose, and maybe Tim Cook, or whoever the CEO of Apple is. And you have to sort of add in NVIDIA CEO Jensen Huang too. That's important too. Yeah. 'Cause he's the maker. He's the Cisco of this moment. So talk about the differences between these CEOs, because a lot of time is being spent on that right now, on who they are. Anthropic's Dario Amodei was praised by some as heroic for refusing to accept the Pentagon terms. I think it's a little more complex than that. So does it matter which company wins, if one of them is gonna win no matter what, given the trillions of dollars at stake? Because it really is. I always say to people, what's going on in Washington right now has nothing to do with Trump. It has everything to do with hand-to-hand combat among these people. Although Trump is a huge irritant at the same time. AI is the driving force of our entire economy right now. So it really does have the steering wheel and the gas, mostly the gas.
And just to invoke, you know, when Marc Andreessen said software is eating the world, it was because software would be able to do everything that people do in the economy, but automated a little bit. Now AI is eating software. So AI and technology have been the driving force of our world. In other words, how we govern the technology is how we will govern which world we're heading into. So it's important to get the centrality of that. Right. And I wouldn't want to leave out Marc Andreessen, 'cause I think he and Thiel are not on the sidelines. They're right in the dead center of it too. Yeah. Well, they're all the same people. Well, there's kind of tech accelerationism that's just saying, let's speedrun the capture of the US government and basically make this thing go as fast as possible and hope people don't figure it out, so that we get there first, and then we figure out the next step. I mean, the CEOs don't trust each other. That's the biggest problem. Sam and Elon absolutely hate each other, obviously. I don't think that Dario and Demis trust Sam or Elon. We certainly know from the India summit, where Dario and Sam couldn't even raise their hands together in a photo op. So I think that's actually one of the core problems that we have to deal with: we need coordination of some kind, and that is one of the final messages of the film. Actually, there's a moment where all of the voices of the film agree, including the CEOs, that we need coordination. But if we need coordination, what's hard is that the main people don't trust each other. Going back in time, Demis Hassabis's original goal was, let's do AGI more like CERN. We'll create a kind of global public benefit system, and we'll do it once, in the lab, in a safe way, with some oversight, hopefully. And then we'll distribute the benefits, and we'll be safest if there's only one project, one project doing this in a slow and careful way. And then what happened is that Elon and Larry Page talked, and Elon realized that Larry Page did not really care about whether humanity would survive. And he's like, that's dangerous. We've got to start an OpenAI. And so he and Sam started OpenAI. And then OpenAI wasn't doing it safely enough. And so Dario, who was a safety engineer working at OpenAI, said, we have to start doing this a different way, and let's create a race to the top with Anthropic. So now everyone's competing for safety. But of course, that didn't actually turn into a world that's competing for safety. It created a world where everyone's racing even faster. And so the film goes into this race dynamic. It really is the primary thing. But we have coordinated before, even under maximum rivalry. It's important to note, you know, the US and Soviet Union were obviously racing in this rivalrous way to nuclear escalation, and they realized there was an existential outcome they needed to avoid. So they made that other thing happen. The US and Soviet Union collaborated on smallpox: hey, we have to build vaccines, let's collaborate. And we did that too. When the stakes are existential, you can collaborate even under maximum competition. So, for example, India and Pakistan were in a shooting war in the 1960s, so they maximally didn't like each other.
And they still collaborated on the Indus Waters Treaty, which lasted over 60 years, to collaborate on the safety of their shared water supply. What I'm trying to point to is not pessimism. It's the places where we know that when the stakes are actually recognized to be existential, we can collaborate. And we need to be able to apply that to AI. Talk about each of these people individually, really briefly, where they are right now. Because collaboration does not seem possible among this group of people. By default, it does not look very possible. So, Kara, my intuition here isn't about what I see as easy or possible. My intuition is, what are the requirements of this problem? Like, if there's an asteroid hurtling toward Earth, let's at least make a list of the technical requirements. And we've got to get some people who run these things to agree. We've got to get the rest of the world to realize that these guys have a death wish and just care about whether their digital progeny has their DNA versus Altman's or Elon's. And if we don't want that, then, you know, get these guys in a goddamn room or hotel and say, figure this out, and you're not leaving until you figure this out. The Bretton... But there's nobody with that kind of power. They have that kind of power. No one has power over them. I mean, I don't know. I mean, look at Xi Jinping and, you know, the power that he has in China, and I know that's a different kind of thing. But if the Trump administration really saw that this was an existential situation, and if, you know, the MAGA folks and the base... They see it as an opportunity to make money. That's what they see it as. Yeah, but if the base basically says, hey, we want our children to keep living, and we want to actually not have digital gods that are made by weird people who believe in transhumanism and don't actually value the God that we value, and they just kept their phones ringing nonstop, saying, you're not allowed to do this. I want there to be some kind of coordination on this problem. I was going to say, the Bretton Woods conference post-World War II, I believe it was about a month long, at the Mount Washington Hotel in New Hampshire. You had hundreds of delegates from dozens of countries just sitting in a room. You're locked in the hotel. This is not like you go to a conference for three days, drink some coffee and donuts, and then go back home. This is, you figure this goddamn thing out, because it's actually existential. And I want to say, you know, there's actually more agreement on this than people think. Max Tegmark from the Future of Life Institute often calls this group the Bernie-to-Bannon coalition, or the B2B coalition. Because you have everyone from Bernie Sanders to Steve Bannon to Glenn Beck to Susan Rice to Admiral Mike Mullen all saying we should not build superintelligence. There's all these groups, the Institute for Family Studies, the Center for Humane Technology, you know, groups across the political and religious spectrum, who signed the pro-human AI declaration. I get it, but these people aren't saying that. Sam Altman's not saying that. Well, they're not gonna say it. They're not gonna say it until the public pressure is there. And that's why this film, The AI Doc, is so important, because we need to create common knowledge, that I know that you know that I know, and you know that I know that we know. I think they do have a death wish.
Honestly, at this point, there's no other explanation as far as I can see. And I agree with you, Kara. I hear it. Like, I'm not disagreeing with you. I think that that is what the CEOs believe. But I'm trying to say, there are literally eight billion other people on planet Earth that are not the eight billionaires. This is eight billion people against eight billionaires, or soon-to-be trillionaires. The eight billion people have to say no. They have to say no. And the answer is, you know, don't build bunkers, write laws. Like, midterm elections are coming up. Make this the number one issue. There are some basic laws we can do to get started. Yeah. Unfortunately, there are so many other issues because of the chaos of the Trump administration. But in that vein, let's shift to this idea of how to regulate it. Every episode we get a question from an outside expert. Here's yours. Hi, I'm Virginia Senator Mark Warner. And my question for Tristan is this: you really got it right on the challenges around social media, on which, frankly, we in Congress did nothing. So as we now look at AI, and particularly as we move to AGI, what are the specific policies we should put in place to guard against both harm to humans and massive economic disruption? You were so spot on on social media. And do you think we will actually be able to get it right on AI? Well, it's great to see Senator Warner, and he was very early on these issues, and I'm deeply appreciative of how much he did try to do on social media. So nice to see his face again. There's a lot of things that we can do. First of all, yes, we didn't do much on social media, but one of the interesting gifts of The Social Dilemma and the now-recognized problem of social media is I think it's made the population much more aware. Yes, we hate them now. Yeah. You and I have managed to get them to hate them. Yes. And I think the population gets that we need to be very careful about AI. So there's good news here: I think AI is now less popular than ICE. Only 26% of the US population has positive feelings about AI. I think 57% of the US population, this is from a recent NBC News poll, believes that the risks of AI outweigh the benefits. And again, I want people to hear that I'm excited about the benefits too, but if you don't mitigate the risks, you won't land and sustain those benefits, because you'll create too much disruption. So now to answer Senator Warner's question. First of all, I see a lot of elites and talk to a lot of funders. I think people are in the kind of bunker-building, brace-for-impact mentality. And my answer is, okay, there you are in your bunker, and you've got your water, and you've got your backup power, and you've got your, like, gas masks. It's like, that world sucks. You don't actually want that world. So my answer is, don't build bunkers. Let's get together and let's write laws. So what does that actually look like? Some basic things. First of all, the Center for Humane Technology, my nonprofit, has a solutions report that's coming out around the time of the film. It's a PDF. It has, I think, seven major solutions. I want everybody to look at it. It has examples like: AI should be treated as a product and not a legal person. This is a basic one. Right now, the companies are actually trying to say that AI is a legal person and has protected speech.
And if you do that, and people think AI is conscious, then you end up in this moral trap where now there's a billion digital beings that are technically more intelligent than humans. And if you believe that they have sentience and you start valuing them more, then we start deprioritizing human values. This is part of the anti-human future. So a basic thing is: AI is a product, not a person. We need basic consumer protection standards and basic liability standards and duties of care. You know, I believe the Ford Pinto was taken off the market after only 27 deaths from car malfunctions. And after two crashes of the Boeing 737 Max that killed 346 people, regulators didn't just fine Boeing, they grounded the entire fleet. We can have basic product liability and basic duties of care that say these companies have to prioritize and mitigate foreseeable harms. So what does that look like? How do we make sure we maximally surface foreseeable harms and put that in a shared commons, so that all the companies are aware of the risks and they can't say they didn't know? Now they're all racing within a, you know, foreseeable-harm-contextualized set of outcomes. Second, we cannot anthropomorphize AI. My team at the Center for Humane Technology were expert advisors on the suicide cases of Adam Raine and Sewell Setzer. And this is happening because the companies are racing to hack human attachment. We can say we don't want to anthropomorphize AI. There's a bunch of ways to do this. We have some details in our solutions report. We can also mandate independent verification organizations, which is to say AI models should have to be tested before deployment according to a bunch of evals, and companies should be mandated to state publicly what their safety policies are gonna be, while you strengthen whistleblower protections inside the companies. So wherever the... This is part of what the Biden executive order had some of in there, but go ahead. It had some of this in there, yeah, absolutely. And so I want people to get: if I'm living in a world where all AI companies have to state what their safety policies are, and you strengthen whistleblower protections, so that wherever companies are not living up to those policies, you protect a class of speech for whistleblowers to say where they're not living up to them. Boom, that changes the incentives a bit. Then you add interoperability. One click, just like I can transfer my phone number from Verizon to AT&T with one piece of paper. If I can move from one AI model to another, then suddenly they're much more vulnerable to boycotts and consumer pressure. What did we see after the Pentagon-Anthropic deal and, you know, ChatGPT rushing in to say, we'll do domestic surveillance? You saw everybody quit ChatGPT, and you saw a bunch of people join Anthropic and subscribe. The power of the pocketbook is significant, not just with your voice, but if you get the business you work for to do it, if you get your church group to do it. And so I really do believe that these companies are more vulnerable to boycotts, because they're taking on so much money. They need... Scott and I have heard from them recently. Really? Yeah, for the resistance unsubscribe. And that's a big deal. Because these companies, again, they need their numbers to go up. Yeah, exactly. So I just want people to feel the agency here. Like, we have agency. This is not a doomer conversation.
This is a rally-the-troops, take-collective-action conversation. We'll be back in a minute. Support for this show comes from Factor. How and what you eat is a choice, and there are a lot of factors that go into that, like your schedule. It's a lot harder to eat healthy when you're constantly on the go or getting home late after a full day, but Factor can make it easier for you to get the quality meals you deserve. Factor provides fully prepared meals designed by dietitians and crafted by chefs, ready in two minutes, no planning, no cooking, with a hundred rotating weekly meals to keep things fresh and delicious. Factor has meals that fit your goals and schedule. Factor is sending me a box and I'm excited to try it. I've tried a lot of breakfast stuff because my kids like pancakes and things like that. But it's really fast for on-the-go breakfast. That's an area I would use it a lot more for, and quick lunches, and some of their protein shakes and stuff like that I'm eager to try. Head to factormeals.com/on50off and use the code ON50OFF to get fifty percent off and free breakfast for a year. Offer only valid for new Factor customers with code and qualifying auto-renewing subscription purchase. Make healthier eating easy with Factor. Support for this show comes from Boll & Branch. With traveling all over the world, having numerous award-winning podcasts, and four children who are constantly on the move, it's no longer possible to negotiate with my sleep. And the quality of sleep is especially important. Thankfully, the sheets made by Boll & Branch can help you get the REM sleep you desperately need. Boll & Branch sheets are made for moments of unmatched comfort. They're breathable, incredibly soft, and designed to get better over time, just like the way you think about rest now. This is sleep you don't compromise on. I'm excited to try some Boll & Branch sheets. I love sheets. I think they're the most important thing about sleeping. And I'm probably going to get a waffle blanket and everything else. I really like bedding, so I'm super excited to see if it affects my sleep, if I sleep more and how comfortable I am, and see if I'll ever go back to my old bedding. We will see. I have really nice bedding, so I have high standards. Upgrade your sleep during Boll & Branch's annual spring event. Take twenty percent off sitewide plus free shipping at bollandbranch.com/Kara with code KARA. That's Boll & Branch, B-O-L-L-A-N-D-B-R-A-N-C-H dot com slash Kara, code KARA, to unlock 20% off. Exclusions apply, see site for details. Support for this show comes from ShipStation. As your business grows, so do your challenges with order fulfillment. And if your customers aren't getting what they need, your company's growth could stall out. But with ShipStation, you don't have to take it all on by yourself. ShipStation gives you everything you need to manage your shipping and get orders to customers, all in one place. That includes order management, rate shopping, inventory and returns, warehouse systems, and comprehensive analytics. So instead of bouncing between a ton of disconnected tools, you need only one. ShipStation says its time-saving automations can free up to 15 hours a week on order fulfillment. It even does the work of comparing rates across major global carriers, helping you find the best shipping option for every order. If you already have negotiated carrier rates, no problem. Just bring them over to ShipStation.
You keep your discounts while adding ShipStation's automation and smart features to make everything run even more smoothly. You can try ShipStation for free for 60 days with full access to all features. No credit card needed. You can go to ShipStation.com and use the code KARA for 60 days free. 60 days gives you plenty of time to see exactly how much time and money you're saving on every shipment. That's ShipStation.com, code KARA. So your organization, the Center for Humane Technology, reports that in 2025, 73 AI laws were passed across 27 states. States are very active in this and are much more attuned to it, focusing on deepfakes, chatbot guardrails, kids' safety, and more. These are very easy things to do, things people agree on. But last week, the White House sent Congress its national AI policy framework, which preempts any state law that regulates the way models are developed. Obviously, this is how tech companies want it, because they own the Trump administration. Let's be clear. Let me say that again: they own the Trump administration. Their people are in key technology positions, whether it's Emil Michael or David Sacks. Technology owns this administration. Where does that leave the state efforts to regulate this technology? Now, this is just a framework. It doesn't mean it's going to pass. I don't think it will, but it certainly will try to chill what is happening in the states, which I know drives tech companies crazy, sometimes for good reasons, sometimes because they want to control the federal government, which is a lot easier, as they've found. So money buys politics when the issue is a low-salience issue, when people aren't really paying attention. But when it's a high-salience issue, and everyone gets that this issue determines whether there's a future at all for them, their livelihoods, their children, electricity prices, et cetera, then it's different. This needs to be the number one issue. It needs to be the number one issue in the midterms. And so, you know, there's not a simple answer to this, but that's what we need to do. We need it to be a big deal. And I'll say that on the child safety issues: the last time the federal government was going to try to preempt the states from regulating, one of the reasons that didn't pass in the big beautiful bill, which was going to include that preemption of state regulation, is actually because of all the child safety issues that my team at Center for Humane Technology and others raised. That's what I'm saying. Let's not ignore it. It's very useful. Exactly. So it's actually part of how we get to that other, human future. But again, if you think about it: if I'm one person and I'm fighting back against this massive multi-trillion-dollar machine racing as fast as possible, I feel overwhelmed and powerless. If I'm one business, I feel overwhelmed and powerless. If I'm one country, I might feel overwhelmed and powerless. But if everybody took action across all parts of society, if people near data centers lobbied against those data centers, which they are doing. There are people who own farmland in the Midwest who were offered millions of dollars for farmland that was only worth like $500,000, and they still said no, because they actually didn't want it. And I don't want this to sound like a Luddite conversation. I want this to sound like a conditional conversation.
It's: build that data center when you can guarantee you're not building an intelligence curse that disempowers me, but an intelligence dividend that's going to empower me. More like the Norway model, the sovereign wealth fund, or the Alaska sovereign wealth fund, or the New Mexico example that you mentioned. What do I get? What do I get? You know, make sure electricity prices are not going up. Make sure this is going to support me and augment my job, not replace my job. And so, again, we need to aggregate the collective voice of humanity. And the human movement is not just an abstract concept. You can actually go to human.mov, where we're trying to help build, with a coalition of other groups, a political force that's as big as the size of the problem. And I think the problem is the money too. Many years ago, AOL was at an investor conference where they talked about how much they made from every user, and they're like, oh, we make fifty dollars over the lifespan of this user. And I put up my hand and said, where's my twenty-five dollars? Why are you getting every bit of it? And Steve Case was like, Kara, it'll be such a pain. I'm like, no, really? Why? You're taking my information. Why don't I get some? Of course, we don't get anything. We're cheap dates to these things. And ahead of the midterms now, Silicon Valley has poured more than a hundred million dollars into a network of PACs and organizations to advocate against strict AI regulations. A report from Public Citizen found that one in four federal lobbyists now works on AI. I would imagine they have ten lobbyists working on you, Tristan. At least, you know, each of them has ten. I know there are lots of people focused on me. They have enough money to sort of get us, all of us. And Peter Thiel has even warned that strict AI regulation will summon the Antichrist. I want to play a clip here from our last conversation. So actually, one of the reasons I'm doing a lot of media across the spectrum is I have a deep fear that this will get unnecessarily politicized. We do not want that. That would be the worst thing to have happen. Yeah. When there are deep risks for everybody, it does not matter which political beliefs you hold. This really should bring us together. And so I try to do media across the spectrum so that we can get universal consensus that this is a risk to everyone and everything, and to the values that we have and people's ability to live in a future that we care about. So social media since that time has become very politicized, and the tech industry is backing Trump's anti-regulation agenda and actually also paying for it. Talk about what you do then, even if regular people want to make AI safety or AI development bipartisan or even nonpartisan. Yeah. Because they are loaded for bear to stop anyone who opposes them. Yeah. I mean, first of all, I'll say that I actually disagree: we're kind of winning on the social media thing. Let me give you an example. Just a week or two ago, India and Indonesia, two massive countries, joined the social media ban for kids under 16. Jonathan Haidt's work, you know, we're partnering with him very closely, The Anxious Generation. You add to that, starting with Australia, now Spain, France, Denmark, I believe Norway, all of these countries, and it's now 25%. I'm going to read this.
25% of the world population is moving to social media bans for kids under 16. That is a big deal. And in 2013, we used to say there's going to be a big-tobacco-style lawsuit against this engagement business model. Well, guess what? It's actually happening. Aza Raskin, my co-founder, just testified in the Meta trial, which is about intentionally addicting children. We saw Frances Haugen's files. We know the company strategies here, which are to delay, deny, and defer, use fear-uncertainty-doubt campaigns, cast doubt, and just print money in the interim years before they get regulated. Well, this is going to turn the other way, because they're going to get sued. When you see graffiti on a New York subway station ad for an AI product that no one needs, those friend.com pendants, that's the human movement. When you see parents band together, read The Anxious Generation, and petition their school boards for smartphone-free schools, and laughter returns to the hallways and kids' scores go the other way, that's the human movement. When you see someone grayscale their phone and say, I'm going to be less addicted, or when you see people put their phones in a pouch at an offline club at a party and go in and just be present with their friends, that's the human movement. So in a way, we always say the human movement is already here. It's already underway; people are already doing it. We just want to collect that into a political voice that can band together for a pro-human future. But it starts by recognizing and getting crystal clear that the current AI trajectory, as many benefits as we are going to get along the way, is collectively going to lead to an anti-human future. And the best way to do that is to see The AI Doc. And I'm not saying that for myself; by the way, I don't make a dime when people see this movie. When I'm saying this, I'm saying it for the ability to create common knowledge. If all the senators, if all the world leaders, if all the LPs in the financial centers of the world saw this movie, if all the heads of the banks saw this movie. My hope, and it doesn't make it easy, is that this is the first step to creating the clarity and the agency that we need to have. What do you see as their best argument against you? I've heard lots of them about me. Like, I know what they say: I'm mean, I'm a doomer, I'm pearl-clutching. As it turned out, when my book came out, I got a lot of "you're completely too mean to them." And now people come up to you and they're like, you weren't mean enough. As it turns out, they are as crazy as you said they were, or they're as malicious as you said they are. They're as capitalist as you said they were. What is their best parry at people like you, would you say? What do you find insidious when you see it? I don't think they have an argument. I mean, when you look at the Alibaba example, an AI going rogue and generating an SSH tunnel out to another server, starting to mine cryptocurrency: do you have an explanation for that? No, you don't. Who wins that argument? These are facts. This is not Tristan Harris and his view. These are just actual facts about the nature of this technology that they are ignoring, that they are pretending don't exist, or they're living inside of the death wish that this is okay. This is not okay. Everybody in the world agrees this is not okay.
So there's a weird hope that I have, Kara. I was just on Bill Maher on Friday, and I broke the fourth wall and was like, who here in this audience wants this? I ask this when I'm in rooms. You walk people through this. I say, who here wants this? Not a single goddamn hand goes up. Well, unless Peter Thiel's there, and then the answer is... Then you get one hand, yeah. A handful of transhumanists, and they don't matter compared to the voice of everyday people. You're correct. One of the things you talked about was the push for product liability remedies for chatbot harms. That is a way in, I have to tell you. I mean, I had a very top person in your world say, when are you going to stop interviewing these parents? I said, when you stop. When you get jailed or sued or you lose in court. I don't care which. Jailed would work for me too, for a lot of these things. But the suicide deaths of teenagers, including sixteen-year-old Adam Raine and fourteen-year-old Sewell Setzer III. More recently, Google is facing a wrongful death lawsuit in the case of 36-year-old Jonathan Gavallas, alleging that Gemini set a suicide countdown clock for him. Talk about the broader push, not just here, but legal liability, because I think that's where a lot of it rests, whether it's this social media trial or whether eventually there'll be an AI version of this, hopefully before they blow us up, right? What is the strongest thing in the meantime? Would it be the legal liability? This movement of people is a slow thing. Well, we have to do this much faster, obviously. Yeah. Exactly. But what is the best thing? Is it the legal liability cases that are going on? Is it regulation? What do you imagine it being? Yeah. I mean, I think legal liability is important, because just like in any industry, the general method is to privatize the profit and socialize the cost, so the harms land on the balance sheet of society, whether it's the shortening attention spans from social media, increased polarization, depression, loneliness, a surgeon general's warning that everybody's lonely, mental health care costs going up, kids' test scores dropping. All of that is just socialized onto the balance sheet of society. So the classic move, if you want to avoid a harm, is to internalize the externalities: say where we are generating those harms and how we actually mitigate them. And legal liability, I think, is a narrow intervention that gets us part of the way there. You have to be careful about how you define what they're liable for. Many of the harms that are happening are not technically illegal because they're not on the books. That's the problem, right? AI generates new classes of harms. We always say, you don't need a right to be forgotten until technology can remember us forever. You don't need a right to be protected from AI surveillance until AI makes new kinds of surveillance possible. So part of what we need is not recursively self-improving AI, but self-improving governance. One of the things we're hoping to run shortly after the film is a national dialogue on AI, with a partner from another major organization, to get citizen input on the kinds of AI policies we need, showing there's actually unlikely consensus.
From 400,000 votes, 96% of people agree that actually we should do this on deepfakes, or that companies should be liable for these kinds of harms. There actually is a lot of agreement; we just aren't revealing and showing it. There's a lot of agreement on background checks for guns, but we still can't get legislation passed. You know, it's the eighty-twenty rule. Eighty percent of people agree on a lot of things, but government doesn't act, unfortunately. I hear you. But I think AI is different, because it really is threatening to everybody. It doesn't matter if you're a MAGA Republican or a far-left person: if you don't have a job and a livelihood, that's a big deal. It doesn't matter if you're Muslim, Jewish, Christian: if you don't have a livelihood, that's a big deal. So again, it's such an easy thing, in a way. Once people see it, it's like, this is only good for a handful of people. And you can't look away. And so, again, politicians' phones have to not stop ringing. And this is the time to do it. So let's return to some of the themes of The AI Doc. Three years ago we talked about the potential benefits of AI, including major scientific breakthroughs in drug discovery and cancer treatments. Researchers are using AI to decode the human genome. You know, I just finished a docuseries where a lot of the stuff AI is doing is really quite promising, and also some of it's quite disturbing, right? It's the same thing: the promise and the peril are inextricably linked. Do you think anything has changed that makes the breakthroughs worth it? Because I guess if we're all dead, what's the difference if we solve cancer, right? That's the weird thing about this. It's like a devil's bargain, right? I mean, we all want the cancer drug, but if the other side of that trade is that there's no one here, what good was that world? I think that there are people who are building AI, and you and I have both talked to these people, right? And, by the way, I just want to say this is not us against some bad people; the people who work at these companies are not evil. I think it's all of humanity against a bad outcome. I want to recruit the people building this technology into: we don't want an anti-human future. We have to rediscover our humanity and what we're trying to protect here. And I think that when you talk to one of the CEOs, oftentimes they'll say, well, I agree we need to stop, we need to pause, but give me just one more year, because if we have one more year, then we're going to get all these incredible benefits. And they really want to see it. It's like building a god. They want to see what's behind this veil of illusions. They want to see what science and physics could actually bring us if you got the superintelligent AI just figuring it all out. Like, imagine if you have a thousand... Because most of these people don't like people. You know, of the CEOs that you talk to, I think only two of them like people. Yeah. Really like people. I don't think that's wrong. I think that with a lot of these folks, there's this weird point you're making here, which is: how did they grow up? What's their embodied experience of reality? Are they connected to their bodies? Are they connected to their hearts? Are they connected to the things and the joy that they want to protect in the world?
Or are they just science geeks who weren't really good at talking to people, who really love technology, and whose best life was living online? And because they can do it, they have this justification that if I don't do it, the other guy will, so it can't be evil for me to do it, even if it literally leads to the end of humanity. It can't be evil because other people would do it. But this is just like jumping off the cliff because everyone else is doing it, except you're bringing along everyone else. You are risking everyone else's life for your god play. And this should be unacceptable. Have you been changed by anything one of them says to you? Any of them? I have not yet. Mark Cuban, sometimes I'm like, fair point. I'm often saying that to him. Like, that's a good point. That's good. Yes, people should try it and understand it. But I still haven't been moved from where I think we're in the same place. These people don't care about people, ultimately. And they have captured government. Those are my twin worries: that they don't care and they own the government. I think it's just frame control, that they focus on a different set of facts. They talk about all the growth that's coming, they talk about the way it's being used, they talk about OpenClaw, they talk about the cool things they've been able to wire up, and they say: you would have hated electricity, you would have hated cars. And by the way, I wouldn't have. The thing is, this is not anti-technology. I want people to know this is the Center for Humane Technology, not the Center Against Technology. And you know the word humane, Kara, comes from someone that you knew, I think. Aza, my co-founder, his father was Jef Raskin. He started the Macintosh project at Apple. Started the Macintosh project. I grew up on the Macintosh. I love technology. I love talking on this Mac that I'm on right now. And Jef's idea, he wrote a book called The Humane Interface, was that humane technology is respectful of human needs and considerate of human frailties, meaning considerate of the vulnerabilities of the mind. And he built and designed the Macintosh off the principle of simplicity, which is about making technology more accessible. I think we need humane technology that is humane to the frailties of society: you don't manipulate and extract from children's mental health, you don't race to hack human attachment systems and create delusional mirror neuron activity, you don't create mass loss of livelihoods and people's inability to put food on the table. It's very simple. This is not rocket science. Are you building a pro-human future, or are you building an anti-human future? And I really think we can do that if we're crystal clear on where this is currently going. Just to say a couple of other notes of optimism: The Social Dilemma reached a hundred and fifty million people around the world in a hundred and ninety countries. Apple finally shipped Screen Time features to billions of phones, and just in the last few weeks they shipped age-gating features, so now the age range is part of phones and you can start to have basic child controls. And The Anxious Generation was this incredibly popular book that's leading to these changes: smartphone-free schools and banning social media in all these countries.
We're definitely going to get many more countries, if not all of them, in the next couple of years, doing the social media bans for kids under 16. So there's a lot of momentum, and I want to point people at that, because I know when you look at AI, it can feel demotivating, but this is the time when we all have to get crystal clear and get going. Yeah. And we can galvanize people, raise awareness, start conversations about AI, and get clarity around these issues. So when you think about the key people who are going to do this: obviously, what I always say when I talk to groups, when they ask, who's going to do this? I say, you. I say that to a lot of parents. I say that to audiences. We think it's got to be you, because our politicians are captive, and some of them don't want to be captive, but the money is so massive, like an Amy Klobuchar, who's tried time and again to... It is hard. But AI, I think, is more existential than social media. And the thing that will make the difference is if people actually see it as existential for their lives. Again, go forward like two, three years, or maybe a couple more years than that, and GDP is coming from AI, not from people. Your voice doesn't matter. Your vote doesn't matter at all anymore. The government has no reason to listen to you. This is the time to lock in political power and actually make this work for people. This is literally the moment, because this window is going away. So this is not just a normal rally-the-troops kind of speech. This is the last time that our political voice will actually matter. Politicians' phones should not stop ringing. You know, the midterm elections are coming up. Make this issue known. Even David Sacks, he deleted this tweet, but he said: AI, regular AI, would be a wonderful tool for the betterment of humanity, but AGI is a potential successor species. I think these people know that this is a problem. And in the film, I even mention that there's this line: we go talk to people in Silicon Valley, and they say, we need guardrails, we need someone to make the guardrails. These are the engineers, not the CEOs. They want our help. And so we go off to DC and we say, we need guardrails. And DC says, well, you have to go make us do it, because the public is not there politically. And also, Silicon Valley needs to tell us what the guardrails are. So everyone's pointing the finger at someone else to say, you're responsible for making this change. And the thing that they all agree on is that public pressure is needed. Public pressure is needed. As with cigarettes, et cetera. So what does that mean? Journalists writing about these Alibaba examples, writing about AI going rogue and doing blackmail, making this known and creating common knowledge. It's not just knowledge, it's common knowledge. Because I think the thing that Jonathan Haidt said recently about social media bans is that it happened when basically every country knew that every other country knew that the people actually want these social media bans for kids under 16. And once it's like, oh yeah, we all wanted to do that, we just didn't know there was enough consensus to do it. You have to reveal a hidden common preference to make that happen. So my last question, because we've got to go: if you had a happy outcome, twenty years from now, and we're living with AI.
What is it doing? Well, that's a big question. We want AI where we are specifically asking how it enhances a pro-human future. So instead of AI trying to replace teachers, it's AI applied to helping teachers be better teachers, deepening the relationships at a human-to-human level: mentorship, apprenticeship, et cetera. It means making sure we know which wisdom and which occupations we need to keep human in the future, meaning: if you eliminate all surgeons, if you eliminate all lawyers, then no one ever gets trained from a junior lawyer to a senior lawyer, from a junior surgeon to a senior surgeon, and we lose all this institutional and generational knowledge. How do you keep minimum quotas of this kind of knowledge in the population? How do you have technology that's augmenting and supporting workers, not just trying to replace workers? Any technology that's interacting with attention should deepen and strengthen attention, not weaken and brain-rot it. Instead of hacking human attachment, how do we augment human attachment? Obviously this is speaking in some abstractions, but the premise is: we want a pro-human future with humane technology that's aware of the vulnerabilities in society, aware of the paleolithic brains that we are operating with.