Intelligent Machines (Audio)
TWiT
Dictation Tools and Closing Thoughts
From IM 860: You Gotta Get Computer - Claude Surges to No. 1 — Mar 5, 2026
I'm Jason Hiner, filling in for Leo Laporte, and I've got our co-hosts Paris Martineau and Jeff Jarvis. We have a conversation with Dan Patterson, and it's a huge week for AI news. We talk about the Anthropic-Pentagon showdown, Claude becoming the most popular app in the world, and Perplexity out-OpenClaw-ing OpenClaw. That's what's coming up next on Intelligent Machines. Podcasts you love, from people you trust. This is TWiT. You're watching Intelligent Machines, episode 860, recorded on March 4th, 2026: "You Gotta Get Computer." Hello, it's time for Intelligent Machines, where we cover artificial intelligence, robotics, and all the aspects of the AI revolution and the AI industry. I'm not Leo Laporte. I'm Jason Hiner, editor-in-chief of The Deep View, filling in for the inimitable, irreplaceable Leo Laporte, who's off this week. And of course I'm joined by our regulars. We'll have Paris Martineau joining us in a moment, but I have the ever-reliable Jeff Jarvis. What else do you need to say? I love it. I love it. Jeff Jarvis is so distinguished that his intro needs no intro. It has its own intro, which is outstanding. And joining us as our special guest for this week is Dan Patterson. Dan, welcome. It's great to be here and great to see you both. Thanks. Yes, thank you. It's always a pleasure. Thank you for making the time. And Dan, you and I go way back, so I'm so thrilled that we get the chance to come together and talk a little bit about some of the stuff that you've been doing, some really important work, some really valuable things. And the stuff that you do, Dan, and Blackbird AI, the company that you work for, I feel like it only gets more valuable every day right now, the way that the world is moving. So I really appreciate you being here. Well, likewise, thank you for having me. And both of you do equally important and interesting work, really fascinating work.
Jeff was just talking about his latest book, which is gonna be pretty mind-blowing. And Jason, thank you. I think what you're doing at The Deep View is just, every day I check in on the site and the newsletter, and it's always innovative. Thanks, Dan. Yeah, I've never seen anything like this news cycle. We're gonna talk about the AI news. I mean, I know we were talking about it as we were getting ready for the show, and it's unreal. We were even looking at the stories for this week and going, like, wait, that happened this week too? Oh yeah, that one was also this week. You know, I mean, it's unreal. So we're gonna get to all of that, of course. And Jeff, who is a connoisseur of the news of these things, of course, I'm gonna go to Jeff for a number of these stories as well; he has great context and also amazing insights on all of it. Before we do that, before we get to the news, Dan, let's talk a little bit about kind of where you're at and what you're doing. You're very familiar with the show. You've been coming on the TWiT network for a long time. I think you are very well known to the audience. But just in case there are a few people who don't know, why don't you talk a little bit about what you do at Blackbird AI? And I'll also just say, for those who don't know, Dan is a longtime journalist. He worked for me at publications that I've worked for multiple times. We go back a long way, and Dan is an incredible investigative reporter and news anchor, a news person going back multiple generations here now, and now working in the AI space, in the cybersecurity space, in the disinformation or, you know, misinformation space. So Dan, talk a little bit about Blackbird for those who don't know. Yeah, it is great to be here. And it does now feel like multiple generations.
I mean, with you both, you guys have seen so many different generations, right, with the TWiT network, where I feel like there's just been evolution through different generations of tech. Yeah. Going back to, you know, podcasting almost predating social media. So really, we've seen a lot of change here and have been pretty fortunate to know you guys both and to be on the network. So the kind of short version of what we do at Blackbird is, right, I think with both of you guys, once a journalist, always a journalist. That's a lot of what I do. Like a Marine. Yeah. Right. I hope we can aspire to more, although I've known many great Marines too. So Blackbird protects, I mean, the line that we like to share is that we protect organizations, executives, and governments from narrative-based disinformation attacks that can cause operational, financial, and of course sometimes physical harm. So, what we do: I think probably many listeners are fairly familiar with the concept of social listening or tag clouds or, like, categories; you can kind of see or get a sense of what's happening on social media by using these tools that gauge conversations on social networks. But of course, we all know that social media is atomized now. It's not just one dominant network. There are many networks, many different chat applications. There's the dark web, which, you know, a ton of it is old news by this point, but there are still a ton of bad actors on there. And we call it a disinformation attack or a narrative attack because it's almost like we're not just listening. And there are many jokes, you know, "we hear you." We're not just listening or using social listening.
We're tracking the narratives, the conversations, the bad actors who will use sometimes automated tools. You know, all of us are familiar with bots, but now, in the age of generative AI and different forms of artificial intelligence, they use generative tools or AI tools to amplify a narrative attack, and these can target people or governments or organizations. You know, there's a very famous example of a beer brand a couple years ago, I forget exactly which one. But I think all of us can think about, you know, maybe the experience of being doxxed, or the type of media that we encounter online that often is coordinated and amplified by bad actors who have agendas. And sometimes that is at a tremendous scale. So I think both of you have probably heard the narrative that, well, you know, AI is just good for slop, and what good is it doing? It's costing jobs and taking all this energy. And there's some truth to some of those narratives. But we use artificial intelligence to find narratives, and we call it a narrative because it traces and moves across different applications, from chat apps to social media and different platforms. So we use that to find the actors, the narratives, what they're saying, who they're targeting, and importantly, how they're being amplified, the tools that are being used to amplify these narratives, and why they're targeting different people or governments or organizations. There are some things that, you know, we definitely can't or won't talk about because they're confidential. Some of our partners are organizations like NATO and large governments and, we don't want to get into politics with specifics, but, you know, we make sure that we are protecting fairly important actors and organizations from this kind of innovative new form of attack.
Dan, is it just attacks, or is it also, you know, I've argued that journalists have to learn new skills to listen better. Yeah, for sure. Do your clients also hear things that, before all the internet and everything, they couldn't have heard before, and learn from and act on them? Yeah, that's exactly right, Jeff. It is, and that's one reason we use "narrative," you know, in this case a narrative attack or a disinformation attack. We use that word although, you know, the words disinformation and misinformation don't have a lot of meaning to the general public. Yeah, right. And especially in this age, there are more specific ways to talk about disinformation, like a deepfake. But no, Jeff, you're exactly right. You know, we try to use the term narrative fairly often because there really is a story in everything, right? And there's a story being told even in, you know, one post can tell a large story, and the person behind it, again, maybe not precisely metadata, but the idea of metadata: the person, place, and thing talking about something, and the way that they talk or shape or craft a conversation. Communicators kind of inherently understand this, but that is almost as important as what is being said. So yeah, I think a lot of organizations, executives, and companies are interested in learning the narrative. It's not just social listening, which feels a little dated. It is the narrative of what is happening now. Yes, and I want to say we also now have Paris here. Paris Martineau. No worries. Investigative journalist Paris has a boss. You know, sometimes it's unfortunate when you have a job; you can't simply be like, I can't be in this meeting, I must podcast. You have to sit there politely and participate and then frantically message your podcast chat that you're gonna be a little late. But I'm happy to be here.
Were you twitchy, Paris? I imagined you being twitchy. I was definitely twitchy. It's been a strange day. I was like, yeah, absolutely. And it's perfect; that kind of chaos is perfect for the week that we've had in AI, which we'll get to, because it has been such a week. What a week in the news, you know, too, right? So we're glad you're here, Paris. And Dan Patterson, our special guest for this week. Dan, I wanted to ask you, I wanted to double click on one of the things that you talked about. Your CEO Wasim Khaled talks about this a lot. We just had him on our show, the Deep View Conversations, and he talked about this idea that you also referenced, which is essentially that perception itself has become an attack surface. Exactly. And that is something that is almost a little bit mind-blowing, but it helps conceptualize the level of challenge that we're dealing with, and that Blackbird especially is trying to help companies, executives, high-profile people who are in danger of being doxxed, or in danger of potentially being physically attacked. You all have signals where, as I understand it, and you can double click on this for us, if you see a certain amount of chatter you can present levels of risk, even, as I understand it, different lights, you know, green, yellow, red, in terms of the level of risk of someone in your organization potentially being physically attacked, based on the chatter that's out there. So all of that is something that was barely, I think, on the radar ten years ago when Blackbird started this. But now you have a lot of clients that depend on that kind of intelligence day in, day out, week in, week out. Maybe you could just talk a little bit more about that.
Yeah, Jason, that's exactly right. And what Wasim was referencing, and he will go into great detail in that Deep View podcast, is, right, perception is the attack surface, and our own realities, especially when we spend time in these algorithmic silos, become our own reality. And yeah, we have these, I mean, this is fairly nerdy, but this audience will understand it. We did just release this API, and we have this, it's called Constellation, and we use that metaphor for a reason, because you can kind of see clusters of conversations. And right, we do present, I mean, everybody has a dashboard. This is, I mean, it is a dashboard, but it presents information in a vastly different metaphor and a different type of view structure, because the information is far more like a narrative, and as those lights go up, you can kind of get a sense of actions that might happen. It is really fascinating, because a lot of it is, right, perception is the attack surface, much like a cyber attack. When people who work in IT or work in cybersecurity look at their network, they can see different risk signals that happen across the network. When you see similar signals, and again, it's using a metaphor and I'm kind of mixing metaphors, but you can see different signals happen in narratives and then get a very similar sense that an attack is about to happen, or that one in progress could lead to physical or other types of harm. You know what's really interesting about this too, Dan, where I think it really gets to the intelligence aspect of what you do, is that it would be really easy for you all to just be the boy who cried wolf: any signals happen, and you could help the company freak out, right? Like, here we are, we're gonna send you the, like, look out, something bad is gonna happen.
But one of the things that you all do, as I understand it, is you will also tell companies, don't respond to this. There's something happening right now, but what we can tell from the patterns is that some of this is bot traffic, some of this is not actual, you know, people, or the number of people involved are ones that have an alternate view. And you will give companies advice where you'll say, do not engage, because if you engage, you will potentially amplify this to a level where you could increase the risk. So it's not just always telling people that they should be freaked out. Sometimes it's telling people, this is not worth, you know, rolling in the mud on. You should let this go, let it play out. And our intelligence tells us that this is likely to just play itself out quietly over the next, sort of, twenty-four to forty-eight hours, or whatever the case. Am I characterizing that correctly? Yeah, for sure. Although I think that maybe we might do that on a macro level. And that's kind of just good comms strategy. Every journalist knows that. Like, just don't feed the trolls. Don't get involved. And I think that on a macro level we probably don't advise companies on how to respond, but we will give them the tools that allow their teams to make better response decisions, so they can not just make better decisions but make those decisions faster. Because, as everybody here knows, sometimes this happens very quickly. I don't know if any of you have had this experience. Years ago I was covering stories that sometimes were pretty prone to pick up different types of bad actors, and sometimes it would happen very fast, and they can find out a lot of information about you, your family, your friends, where you work, what you do. And I just remember from personal experience that happened within seconds.
And so we probably advise companies to pay attention to certain risk signals or look out for types of behavior, as opposed to, like, do this in this particular instance, because everybody and every organization is different. But again, my advice is always what you said, Jason: don't respond. Don't get into it. What about cases where, I mean, the attack scenario is somebody comes after you. They don't like you. They think you're vulnerable. There are various scenarios. I'm looking at my favorite story of the week, and it's not AI and it's not Anthropic. It's McDonald's CEO eating the new Archburger. Did you all see that? I didn't, no. Oh, it's brilliant. A man takes a bite of a burger in a way that makes... And you know he had multiple takes, because the number of fries in the fry container went up and down. Maybe it just went on. So I just want to get that in there, because I thought it was so funny. And Burger King came along, and the CEO of Burger King took a monster Whopper bite of his Whopper, which is ridiculous. But my point is, finally, there was no one in that room at McDonald's who had the courage, obviously, to say, boss, I don't think you want to do this. I think something's gonna happen here. This is self-inflicted damage. They didn't have, there was a management issue there, in terms of not understanding how to tell the boss something. But there was the larger question of saying, what are you gonna say about the company in this case? What narrative are you creating? How does that dynamic work when it's self-inflicted? How much are you in a position of kind of educating them about their own companies and their own selves? Well, you know, we don't have to say anything to a company, but what we do is provide a spectrum of tools and technologies. On the one hand, like I was talking about earlier, we just released an API that's hyper-nerdy.
Like, the engineers are going to understand the API. But we also, Jeff, we have this technology called Raven Recon, which is easy to understand and easy to use. Anybody from an engineer to an executive can understand this tool. And we call it recon because it's built for finding information that is happening to or about individuals. So even without listening to your own comms team, even though in this case they might be sitting there cringing, you could give it to an executive. In fact, my phone's going off with a likely scam right now. You could give it to an executive, and they could easily understand that, okay, these narratives are happening about you right now. You know, make whatever decision you want to make, but here are the risk signals. Here is what's happening. And again, because there are more technical capabilities with the technology, with our Constellation platform, like I referenced earlier, because it is like stars in the sky, it touches a lot of different points, you can then say, hey, engineering team, let's learn a lot more about these narratives. Who's pushing them? Are these anomalous, are they bots, or are these actual humans saying actual human things? Which can give you a lot of information, you know, if it's bots or if it's real people reacting. Again, Jeff, in that scenario, anybody can react to that and say, okay, I need to learn more about what's happening. I see the risk signals accelerating. Yeah, you know, Dan, that's probably one of the reasons why a lot of your early customers were, a lot of, like, crisis comms organizations that had some kind of crisis and wanted to figure out, how can we manage this, how can we be smarter about, you know, understanding it and really stay on top of it.
And like I said, there's a level of intelligence that your company provides that was just not even on the radar, you know, a decade ago. And now you help companies be a lot smarter, you know, about this area. Since then, and you mentioned this at the top too, you've also started to engage other clients: nation states, NATO, others. So can you talk a little bit about that, the evolution of the companies that are coming and asking for your services and the intelligence that you all are offering, and how that's kind of changed and evolved both the mission of the company and maybe the tool set that you have to offer? Yeah, right. I mean, it really is about making more strategic, faster, and intelligent decisions, and enhancing those capabilities. You know, I've been with Blackbird just under three years. And our CEO, Wasim, who you spoke with, and our CTO, Naushad, started working on these problems about a decade ago, back in the era where, I'm sure you all are familiar with the term fake news, when that was kind of the term du jour for what was happening in the media ecosystem and the social media ecosystem. And they have also been working, along with some of our other engineers, with artificial intelligence long before it was fashionable. And our technologies kind of advanced as those, again, no pun intended, as those narratives and as those ideas advanced, right? We went from kind of an unsophisticated concept that we had the words fake news for, which really didn't do a good job of explaining the phenomenon. And their technologies kind of looked at, okay, here is a good use case, maybe it is crisis comms, because we can kind of figure it out using AI, or at the time probably using machine learning and other technologies.
And now, you know, as we advanced through maybe the crisis comms era, and I know that we had APIs and we had different ways of tapping into the data, by the time I came on, we developed this tool called Compass. And we still use this. I mean, if you have technical abilities you can use our tools, but this is one any consumer can use: compass.blackbird.ai. And, you know, we don't actively promote this to consumers, but it's very easy to understand. You do have to create a login, and that's mostly to prevent spam and other junk. But it will check any claim that you see online. You know, often if you're scrolling social media you'll see a lot of very confident claims. And you'll see something that could be disinformation, it could be accurate information, or it could be intentionally or unintentionally misleading information. Often we share stuff, we don't mean to, but we share stuff that is misleading, misinformation. And you can put anything, literally anything: post a link to something, or just type something in there. I saw so-and-so talking about such and such. And it will not just give you a yes-no answer, it will give you the context, with links to where you can learn a lot more about this, and it will do it fairly quickly, a paragraph or two. We have a fast version that will give you a sentence or two, but the longer version will give you good context. Now we've built it out, so kind of to answer your question, Jason, about the trajectory: it will check videos, it will check photos and images, so you can tell, was this a deepfake, or was this manipulated, you know, a cheap fake? Was this something that was manipulated to advance a narrative? So those technologies and tools, I think, kind of help us look into the future. Like I said, we built this about two and a half, three years ago, when I joined the company.
But now, you know, with this new API and Recon, it really does take things that are on the one hand very technical, but on the other hand, for executives or individuals, pretty easy to understand. It does require a technical deployment, but once it's deployed, it's easy to use and understand, and it can allow you to make very fast strategic decisions that help you, I mean, make better decisions faster and, in theory, stay safe, whether you're in comms or government, or an executive in an organization, or an individual, and make decisions that are better informed. How do you make sure that tools like that aren't themselves unduly influenced by disinformation or kind of deepfakes, or just, I guess, the general low-quality nature that much of our information ecosystem has taken on, especially in this age of AI? That's a great question. Very generous. Good idea. Yeah, I mean, that's very interesting as well, right? So I think, if I understand your question, Paris, it's like, if the tools are dependent on the ecosystem and the ecosystem itself is being manipulated, how do you make sure the tool then isn't manipulated? Yeah. Yeah, right. So that is again where, and I don't have the engineering chops to tell you technically how it works, but it is why we have engineers who really do. You know, we don't have, like, here's one whitelist of sources, and we make sure this is a good pure list of sources that will always tell you the truth. It is pretty dynamic and robust. We don't just look at all the social networks or all of the news websites. I said this a little bit before you joined, but we will look at chat applications, the dark web, the entire information ecosystem.
We have a pretty good understanding of what's happening, and because we are full of experts who are building systems that can look for this, we do see the actors that are pushing manipulated narratives, and we see the behaviors, and so we also understand the trends, the tactics, the techniques; there are very technical words for this. And working with some partners, again, like NATO, understanding these tactics is not a new practice. So they will inform our engineers about the signals and the types of behaviors and the platforms on which information is manipulated. And so again, I'm not an engineer, but I know that we take some of those signals, many of those signals, and we build those into the system, so we have a better understanding and are not manipulated ourselves. Dan, so, want to be respectful of your time. One last question I'm thinking about. So this compass.blackbird.ai, this is a great resource for everyone in the audience to be able to use if they have questions about the veracity of an image, of a report, of a video. You know, is it a deepfake, is it manipulated, all of those things. Could you talk a little bit about how the company itself is using AI? Are you developing your own models for that tool? Also, when I talked to Wasim, one of the things he mentioned that sort of, you know, scared me straight a little bit was he was saying that if you're a leader and you're not working with and thinking about AI agents right now, and you're really just still using chatbots, you're already behind. Like, you need to really be thinking about what are the ways that agents can transform, you know, your organization, the ways that you work, the ways that you operate, all of those things.
And so I thought this would be a great opportunity to talk a little bit about the ways that, even though you all are and have been an AI company since before it was fashionable, as you said, AI itself as a tool is changing some of what you do and the ways that you do it. Yeah? Yeah, for sure. So we do build and train our own models. And on the second part of your question, Jason, you know, Wasim probably spent quite a bit of time articulating what executives and decision makers should be doing when it comes to agents. But I think the reality is many of these tools are so accessible that they're being used by your teams, and they are transforming the business. So above you and below you, they're transforming the business. And I don't want to speak for him, but my guess is that you would just have to have the vocabulary and the capability to use these tools, which are advancing so much more rapidly than almost any other technology we've seen prior, to be able to manage and to lead teams, to make good strategic decisions for your own companies, and to know the difference between homegrown and home-built networks, what you can have with your own generated or trained systems, and make the decisions on, you know, maybe kind of the old technology challenge: do we build it? Do we buy it? Do we integrate something? And I think that these things are happening so rapidly that decision makers and executives must have the same vocabulary as the rest of their team, their clients, their partners, and the rest of the players in the ecosystem. Great insight.
And that's one of the things that podcasts like Intelligent Machines are trying to do: help people have that understanding, that knowledge and awareness of how these tools are advancing, so that they can work with them, learn about them, and be able to sort of lead from the front, as it were, in their companies. Dan Patterson, thank you so much for being here. Always a pleasure. Thank you for the important work that you and Blackbird are doing, really providing people with tools that were not possible before, that are making the world safer, that are making us smarter about the level of threats and risks that are out there. And it's also just a pleasure to have you. You are one of the best people in this industry, one of my favorite humans, and it's such an honor always to be with you. You too, Jason. And Jeff, Paris, and Benito, it's great to be with you all. I really appreciate being able to talk about this stuff. Thanks. Take care. Amazing. Take care, Dan. Take care. We'll see you. Okay. All right. Dan Patterson. What a powerhouse. I mean, that stuff that they're doing, you know, I just couldn't even have conceived of some of those things even five or ten years ago, and the fact that it's rapidly accelerating. It is accelerating, right? Like, the ability to do the things that they're talking about, right? It's empowering threat actors in ways that we never could have, you know, I guess we could have anticipated; science fiction has anticipated some of it, but it's at a level and a speed that's just out of this world right now. I mean, that's kind of a chicken-and-egg thing though, right? Because science fiction is also responsible for some of this stuff.
I was about to say, a lot of the people doing these sorts of things are directly influenced by science fiction. True. Chicken and egg. Chicken and egg. Chicken and egg indeed. Well, Paris, great to see you. And Paris and Jeff, thanks for letting me, you know, sit in the Leo seat for this week. Enjoying it. We get in trouble when we do it. We do have to say Jeff and I are on hiatus, because we bring up too many spicy stories when we have to do it. I'm here to take all the bullets this week. Thank you so much. Thank you. Everything that's great, I will tell 'em, was all you guys' idea. You know, all of the mistakes were mine. So now you're speaking our language. Excellent. Excellent. We have so much to cover. I want to start. A big week in AI. Oh my gosh. Oh my gosh. I want to start. Money! I was gonna say, we have a lot of ads this week. Let's pause and send it over to Leo to talk about one of the sponsors for this week's show. This episode of Intelligent Machines is brought to you by DeleteMe. If you have ever wondered how much of your personal data is out there on the internet for anyone to see, please do me a favor, don't look. Because it's more than you think. It's appalling. Your name, your contact info, your social security number in many cases, your home address, even information about your family members, all being, completely legally I might add, compiled by data brokers, and they are completely legally selling it online to anyone: foreign governments, marketers, law enforcement, hackers, anyone. Anyone on the web can buy your private details, and that can mean the worst: identity theft, phishing attempts, doxing, harassment. But there is a way to protect your privacy: with DeleteMe. Look, I live online and I know how important this is. In fact, we use DeleteMe. I think every company should use it for their management, because we were getting phished.
People were able to find out all sorts of information about our team and use it to send very credible phishing texts trying to rip us off. We immediately signed up for DeleteMe and we've been using it for years. It really works. It really works. That's why I recommend DeleteMe, that's why we use DeleteMe, and it's why you should use DeleteMe. It's a subscription service. Now, that's important, because it doesn't just do it once. It removes your personal info from hundreds of data brokers. This is the key. There are more than 500 data brokers, and there are more every single day. So what do you do? You go to DeleteMe, you sign up. By the way, it's joindeleteme.com. Make sure you use the right address, joindeleteme.com. You sign up, you provide them with exactly the information you want deleted. They need to know what you don't want, and that way they don't delete stuff you do want. They take it from there. Their experts know exactly where to go and how to delete it. They'll send you regular personalized privacy reports. We just got one the other day showing what info they found, where they found it, what they removed, so you know what they're doing. And this is important: it's not a one-time service. DeleteMe is always working for you, constantly monitoring and removing the personal information you don't want on the internet. And you need that, because these data brokers are not the nicest people, and they're constantly rebuilding those dossiers even after you have them deleted. They have to delete them by law, but nothing stops them from recreating them. Plus, there are new ones all the time. In fact, the sleaziest thing they do, often, is change the business name so they can start over with a clean slate and all your information. To put it simply, DeleteMe does the hard work of wiping you and your family's personal information from data broker websites, and no one does it better.
Take control of your data, keep your private life private, sign up for DeleteMe. We've got a special discount just for you today: you get twenty percent off your DeleteMe plan when you go, and this is important, get the right site, to joindeleteme.com slash twit. Use the promo code TWIT at checkout. The only way to get twenty percent off is to go to joindeleteme.com slash twit and enter the code TWIT at checkout. Joindeleteme.com slash twit. Use the promo code TWIT. If you just Google DeleteMe, you'll go to the wrong place. There's another company in the EU, and they don't do the same thing. You want to go to this one. It's joindeleteme.com slash twit. Don't forget that offer code TWIT. Now back to Intelligent Machines. Okay, well, we have to talk about this weekend. Because, yeah. I was saying, it's very rare that all of my conversations with normal people who don't care about AI begin with, oh my God, the AI news. And this was one of those weeks. Certainly in AI, it's the most consequential news weekend I've ever seen. But I almost think even in terms of tech, I don't know that I've ever seen a weekend like this, where tech was the story, the story, even when something as consequential as the US invading another country was almost coincidental. Oh, and by the way, we also invaded a country, or bombed a country, and we used Claude for that. And we used Claude for it, right. So the whole Anthropic-OpenAI saga here would be huge on its own; add a war or two. Just for context, for anyone who doesn't know what we're talking about: on Friday, Trump directed every federal agency to immediately cease use of all Anthropic technology. This was the culmination of a simmering brouhaha between Anthropic and the Department of Defense. In part, we spoke about this last week.
It's this kind of paradoxical thing where Pete Hegseth has simultaneously designated Anthropic a supply chain risk to national security, and they also used Anthropic, and Claude in particular, as part of their operations to enact war in Iran. Yes. Yes. The number of aspects of this to unpack are so many. So we discussed this a bit last week, where I think Leo's starting point was similar to Stratechery's this week: kind of, the government has to decide how to use these tools. I disagreed, and I disagree with Stratechery as well. I think there is a need, especially in unusual times, shall we try not to be too political and call these unusual times, a need to speak one's conscience and decide what's used and what's not. The analogy I make is that certain pharma companies will not sell certain drugs to certain states if they're used in executions. Yep. Yep. And so companies have some rights there and have that ability. So Anthropic came along and said, you can't use our stuff to autonomously kill people, and you can't use our stuff to surveil Americans. And there was a moral aspect of that, but there was also a practical aspect of that. Like, this stuff ain't ready. Yeah. You don't want to use it for that. You know, I wouldn't trust it to go kill people. What are you doing? Then the Hegseth stuff got all macho chest-beating. Yes. And then Trump got all macho chest-beating, and, as Paris recounted, they're out. And it's worth noting that, as of when I checked today, the Pentagon has not formally issued this supply chain risk designation through any official channels. All this messaging has been on social media, as is kind of the norm in this administration, which really adds an unusual aspect to all of it.
What we've heard so far: the Washington Post also reported this weekend that a hypothetical around nuclear ballistics might have been, for lack of a better term, what blew this whole thing up. The Washington Post reported that Emil Michael posed an extreme hypothetical during a meeting in January 2026: if an intercontinental ballistic missile were launched at the US, could the military use Claude to help shoot it down? And the accounts of what happened next diverge sharply. The Pentagon's version is that Anthropic responded, you could call us and we'd work it out, and officials were really mad at that, because they were like, that is ridiculous. Anthropic's version is, they say that's totally false. We said we've always agreed to allow Claude for missile defense; it wasn't part of the red lines, as they call them. Anthropic says the red lines are two specific categories: mass domestic surveillance and fully autonomous weapons. Right. So that was crazy enough as a story, right? Right there, that has huge implications. Is the government going to destroy Anthropic? How far could this ban go? A friend of mine at the University of Virginia said, do we have to stop using it because the university gets grants from the federal government? Those are earth-shattering questions. Yeah, and then along comes Sam Altman. Yes. Who comes in and says, okay, I'll do it, and apparently was doing this all along, and at various points supposedly had his own rules, agreed with Anthropic's rules, but really didn't, because otherwise they wouldn't have done it. Then he admitted that it was opportunistic and sloppy. Then he whined to his staff that this was really painful. Give me a break. And so that's added a whole nother layer here to where this goes. I've got to ask you both, because you're younger than I am, which is not hard to be: does the name Eddie Haskell mean anything to you? No. No.
Oh, Paris, Paris. But I also couldn't remember most names that I've heard. Well, no, this is an old-guy TV reference. This is Leave It to Beaver. Oh. I don't know Leave It to Beaver. Well, then you should know Eddie Haskell. Eddie Haskell was the friend of Wally's who was the two-faced, slimy ass-kisser. Oh, Mrs. Cleaver, you look absolutely lovely today. And that's what, and I got AI to do a GIF for me of Sam Altman meeting Eddie Haskell, but it doesn't mean anything to you, so I'll not even bother showing it. But Sam Altman proves himself to be a two-faced, traitorous ass-kisser to the government. The sort of behavior that led to Sam Altman's original ouster from OpenAI. Exactly. So now what happens? So now it gets more interesting, because he whines about, well, the damage to the brand, blah blah. There's a movement to delete ChatGPT. Anthropic goes to the top of the downloads in the App Store and the Google Play Store. It has skyrocketed to the number one app in the world, passing ChatGPT, which had been the number one app in the world for, you know, three years. Anecdotally, as listeners of this podcast will know, but Jason, for context, I'm in a lot of subreddits for all the various models. And I really enjoy being in the OpenAI ones, in part because, up until recently, my main source of joy was, whenever OpenAI would deprecate a model, people would freak out because they were gonna lose their girlfriend. Now all of those people are aggressively organizing to switch to Anthropic. They're angry. There have been hundreds of these things. Paris, let me understand. These were adamant ChatGPT fans. These were ChatGPT fans so adamant that they're attuned to different models, making lengthy, hundred-word-plus posts whenever a brief change happens to a model or some sort of tweak is made to the system.
And these people are not only switching en masse to, and it overwhelmingly seems like, Claude, but they are really relishing the experience of not being in the ChatGPT ecosystem. I mean, maybe this is just the anecdotal experience that I'm seeing, but I've seen probably twenty to fifty posts in the last day or two, not looking for it, of people being like, wow, I like Claude so much more. I think I've seen one or two that said they liked John Clay. So here's my question, though: did Sam Altman do permanent damage? Did he shoot his company in the foot? Or does this blow past? I mean, I'm of two minds. One, ChatGPT is in a position now where they are the dominant market leader. They have such a, a head start isn't even the way to describe it. They have so much more market penetration than any of the other companies. Significantly, significantly more. Both they and Gemini have so much more penetration than Anthropic, so it's hard to compare the two. But one consequence of that is, when you're overexposed, you are increasingly likely to have your brand reputation be tarnished. And I do think there have been a number of instances that have increasingly tarnished the ChatGPT and OpenAI brand, starting with everything going on with the sycophancy, the suicidal-impulse stuff. A common complaint I see in all these forums is that, as a counter to the kind of sycophancy and AI-psychosis-inducing tendencies of these models, now you'll be asking ChatGPT for help sorting through some emails and it'll be like, you're not crazy, you're not broken, take a deep breath and we'll work through this together. And people are like, what the heck? I'm just asking for my emails to be sorted. So I do think that this exists in that context. I think that this is a huge sticking point. Like, I was out to dinner last night with a friend that is not plugged into AI stuff at all.
I'd say an anti-AI person in every sense of the word. She was like, oh yeah. What do you think, Jason? Yes, so I think it reminds me a little bit of, there was the boycott Uber movement, you know, because of all the things that came out about them. They changed CEOs; it caused them to change CEOs, for sure. And there was the Instagram one, similar to that, a boycott of Instagram. It does remind me a little bit of those, for sure. They both were consequential, and I think they were brand moments where there was brand damage done. They did both recover. And so I expect this to be a bit like that. And here's why I'm thinking so, and there's a couple parts of it I'd love to unpack with you all and get your thinking on too. So I think that what OpenAI still has going for it is, with the level of talent they have at the company, they are also making these tools, like, the easiest to use, I think, by and large. Even lately, I've heard some programmers talking about the fact that developers are using Codex instead of Claude Code, because they're like, it's actually gotten better over the last few weeks. And that kind of surprised me, because Claude Code has just had it figured out among developers for a while. But ChatGPT itself, like some of its controls, user customizations, things like that, are just a little bit better. I think their browser is also just a little bit easier to use. People often go to the path of least resistance, as we all know, human beings. And so I expect, as long as they don't have, and there's the risk of this, some people have left OpenAI kind of publicly, right, even in the past week, over this. Employee sentiment is a huge aspect of this.
If they start to lose, and they've lost several important people over the last few days, if that exodus becomes more acute, then I'm gonna be more worried. But I do think right now they still have the people and the staff and this mission of making these tools better and easier and faster, to the point that I think they will likely still own a lot of the consumer sentiment on this, and this will be a little bit of a blip. I do have some bigger questions, though, too, and I'd love to get your all's thoughts on this. The Sam Altman thing. I do think that, you know, Altman is very much almost the mirror image of Dario Amodei. Dario, and my sense has always been this way, has this very sort of single-mindedness, like he's steadfast on an idea, right, that he pushes forward. And I want to talk about one of those ideas in a minute. Whereas Altman, I think, takes in a lot of things and reads sort of the tea leaves and then makes some decisions about which way things are going, like, let's listen to our audience. Is it unfair to say he's a little Trump-like, in that he's impulsive? I mean, he will act very quickly, right? On the other side, I wouldn't even consider this move impulsive. I think this was calculated. I assume that Sam Altman, OpenAI, Google, I'm sure even Meta, were all salivating at the idea of getting Anthropic's contract. A $200 million contract, but one that comes with a lot of, not just baggage, but bombs. Well, none of them care, because, I mean, Google rolled back its internal prohibition. Well, that's what I think is the most interesting part here. I think that, yes, all these consumer reputational blights will continue to exist.
But the real place where this could actually make a market difference is in the employee talent wars that these AI companies are waging. And this is something Karen Hao got into in her book Empire of AI, which is that all of these companies are paying their employees crazy amounts of money. They kind of have the pick of the litter, in a sense, and it's been a real struggle for all of them to figure out, hey, how can we attract the best possible talent that can give us the edge? A lot of these people were attracted to these companies by their making lofty promises about ethics, doing the right thing, building a technology that's going to change the world. And you're starting to see that in the way that employees of OpenAI and Google are really reacting negatively to this and being like, hey, why aren't we standing up to these absurd demands like Anthropic is? And I think that's the sort of thing people are going to listen to when they make their decisions about where to work. If Anthropic comes to any of those people that have signed this, Anthropic now has a literal list of a thousand employees that they could possibly swoop up from these companies. So here's the thing that is interesting, I think, for us to consider: those two red lines. I'll go back to that for a quick second, because there's been a lot of, and you mentioned the Stratechery piece. There's also Altman, who tweeted out, I don't feel like the executives of private companies should be making decisions in a democracy; they should be made by elected officials, not non-elected executives of private companies. To a degree, Ben Thompson at Stratechery unpacks that and essentially is saying much the same thing. But here's my question about that, and you all tell me if you understand this the same way.
I don't feel that Dario and Anthropic are necessarily saying, this is wrong and nobody should do it. They're saying, we are not comfortable with it. As, Jeff, you said: we know this technology, we know the reliability of it, we know the challenges of it, and we are not comfortable with this technology being given, you know, a weapon where it can choose which humans should be, you know, taken out; there should always be a human in the loop on that. That seems like a pretty reasonable ask. And then the other is mass surveillance, which is illegal. What the government has said, and the very important part in the contract was, essentially: we can use this technology, we want to be able to use this technology, and, as I understand it, this is standard language in all Pentagon contracts, for all legal purposes. And what they said verbally was, we don't have any intention to use your technology for killing humans autonomously or for mass surveillance, and yet we have to hold the line; we're not gonna allow any carve-outs of language saying we won't agree to those things specifically, because they are illegal, and all things that are illegal are covered by the contract. And Anthropic said, we're not comfortable with that, we want that carved out, because these are areas where, like, we've seen this movie, right? Where, when robots and AI can kill humans without any human intervention, the potential consequences are very, very negative, and we believe that we don't want the technology we've built to be a part of that, because our feeling is the consequences are very negative, and we don't want to do that.
That to me seemed like a very reasonable stance, and I feel like it has gotten mischaracterized as them moralizing to the government and trying to tell the government what to do. Am I understanding it correctly? What do you think? Yeah, I mean, this is what we discussed last week too. And Ben, who does great analysis, though I will confess I always go to Gemini and ask it to summarize him first because he's so long. Ben argues strenuously that you can't have companies deciding what's what; you've got to have elected officials deciding how to use these tools. Otherwise you'd have a dictatorship of companies. Well, there's a few issues here. One is, we are in exigent circumstances with the government we have. And individual responsibility and accountability will matter. And so, just as with the six members of Congress who did the videos reminding members of the military that they should not follow illegal orders, because they are ultimately responsible under the law, the precedent established at Nuremberg. Pardon me, I'm invoking Godwin's law here, but I'm gonna end up going there a little bit for a minute, sorry. It's gonna get worse for a second, then it'll get better. So there's responsibility for the military person. Is there not also responsibility for the company? As I mentioned earlier, pharma companies choose not to sell their drugs to states that are going to use them in executions. E.g., IG Farben, the manufacturer of Zyklon B, has been held responsible by history and others for selling that poison to the Nazis in the concentration camps. And so we say to that company, and I'm backing that exact comparison, Jeff, I won't let up: you are held responsible for that. You are accountable for that. And so, the Stratechery piece, Ben's worried about the dictatorship of the company.
Okay, I get that, but I'm worried about the dictatorship of the government. And worried about the dictatorship of a dictatorship. Exactly. And if you're going off and doing things... we constantly are saying to the AI companies, you need to be careful about how your stuff is used, you need to put in guardrails. Though I think that's impossible, we still say that; people say that, right? You are ultimately responsible. We say the same thing to social media companies. We say to all these companies: you are responsible, you are accountable, you have to be moral in these decisions. Well, okay, so Anthropic comes along and says, yeah, we have a moral line and here it is, and we don't want our stuff used in this way. And then they're being accused by people like Ben of trying to be dictators. No. They're trying to be accountable and responsible in, again, exigent times, where the risk is very high that their technology could be used in a way that would, at the least, shame them in history. So I think they've got an opportunity and a need and a responsibility and a right to say no. So it's a really interesting issue here of where you go. And then if we go to Google, I think Google's right now trying to hide. Like, just forget us for a while, we're not ready. It's very interesting. Google and Meta. And Microsoft, I think, and Amazon, right? They're all, well, Amazon, we'll get to that in a minute. But they're all kinda trying to hide. But Google now has employees rising up again, as they did in the robot days, that is, the days when Google had a robot company, saying, don't use this stuff for war at all. Yeah. And so where do these other tech companies go, for all the reasons that you mentioned, Jason, but also for their moral and legal responsibility to themselves and their legacies.
I also want to point out Lieutenant General Jack Shanahan, who was the inaugural director of the DOD's Joint Artificial Intelligence Center, the guy who led Project Maven, the Pentagon AI program that famously caused that Google employee revolt in 2018. He weighed in on the subject this weekend as well. He said painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end. He called Anthropic's red lines reasonable and said, quote, no LLM anywhere in its current form should be considered for use in a fully lethal autonomous weapon system; it's ludicrous to even suggest it. I think that speaks for itself, you know? I'm just baffled as to how the situation got to this point. Because you're dealing with some nut jobs. I mean, yes. I mean, Hegseth, at the same time, he went to the Boy Scouts and said, okay, you can keep girls for a little while, but you have to get rid of all the DEI. So I'm sorry, I'm gonna do it again here, I'm gonna go to the same place. So it's now the Hegseth-Jugend, right? And they're dictating to the Boy Scouts. What do you do in that case? You've got macho, it's a macho thing: you don't dare disagree with me, I'm gonna get you, I'm gonna destroy you. And they have the mechanisms to do so. Yeah. What was unexpected, maybe, was that they did this and they told them. Now, to your point, Paris, they haven't actually executed on the promise, which is to make them, you know, a supply chain risk, which is normally reserved for foreign adversaries. So it was definitely a DEFCON 5 move. It's like, we're going straight to DEFCON 5. We're gonna term you an adversary, you know, to the republic. And there was some concern, I think, that, okay, that could really damage Anthropic, right? That could kill them, almost, couldn't it? It could kill the company, right?
Or at least it could make a lot of people uncertain about whether they could still do business with them. Yes. And they largely, for those who aren't familiar, I'm sure most of our audience is, make most of their money on their API, on their enterprise business: companies paying to use their services. So if a bunch of those companies have questions about whether it's legal for them to use it, that causes a lot of problems. And then something happened that we didn't expect, which is that American citizens, and maybe people around the world, were so inspired by the principled stance that Anthropic had taken that they started downloading Claude. Most people, if you said Claude, they didn't even know it was a chatbot until last week. Yeah. I mean, nobody really knew what Claude was. Not nobody, obviously; it had its fans. A lot of the tech narrati, like the people who are really into AI, a lot of them have for, not years, for months, because in this industry it feels like years, but months, been saying Claude is the best. And I've seen this, and probably you all have seen it too: a lot of people who are really deep into the AI ecosystem have been using Claude, because they're like, there are just parts of it that are a lot better, more accurate, fewer hallucinations, safer, and all that. So fine. But the broad spectrum of people did not know what Claude was until this weekend. And all of a sudden it went from, like, 40th to 100th on a lot of people's lists, the Jimmy Kimmel of AI, and shot to exactly the top, number one, within hours, essentially, after the word came out on Friday.
It went down this week, twice, in part because of a surge in downloads and usage, but also, I think, a data center got hit by a missile. So, you know, a complicated weekend for good old Claude. Very complicated. So Claude rises to the top. And I don't know about you all, but I'll share an experience I had. I have a friend, does not work in tech, works in nonprofits, is an educator, very smart person, a wonderful friend of mine, who came to me on Saturday afternoon. He had downloaded an AI chatbot for the first time, and he downloaded Claude, and he was using it to do two things. He was having to make, like, a social speech, and then he was working on a presentation that he needs to give in an academic forum. And he told me, he's like, I have to show you this. This one, I asked it to make the basics of a speech for me. And the other thing, he said, is it helped me think through the arguments that I need to make in this presentation. And he said, I have to tell you, and I'm going to show them to you: I'm both really upset and incredibly impressed. I'm upset at how good it is, and I'm also impressed that it could help me think through some of these things in such a powerful way. So he showed me them, and he had done a great job with these things, and I was so blown away that the first place he went was Claude, because he had seen all this. He subscribes to our newsletter, but only, I think, because I started there in December, and he's been keeping up with it. But that told me that Claude has truly broken through to the mainstream.
It has skyrocketed to a level of attention, and not only that but usage, in a way that really is giving it a moment. And I do wonder, to your earlier question: is it durable? Are they truly going to be this counterpoint? Is it a cultural moment? But I think we have to acknowledge that, at least in this sort of three-year AI boom that we're in, we haven't seen anything like this before: something, you know, come out of nowhere and go to number one in the U.S. I had a very similar experience this weekend. A friend of mine, who's always been, I guess, a big Gemini head, but has kind of waxed and waned, like there was maybe a time where he was using it more, but it wasn't that often, mentioned to me this week, he was like, yeah, you know, I downloaded Claude and I'm just playing around with it. And by Monday he was like, Claude's helped me reevaluate my whole, like, current life, professional progress. I've vibe-coded a CRM for the custom outreach that I'm gonna be doing this week. I literally just got a text five minutes ago, talking about some sort of camera that he wants to buy. He's like, I've got to ask my new best friend, Claude Opus. I'm like, good for you, bro, I guess. So, the other thing, I didn't put this in the rundown, but Nvidia's Jensen Huang said they're gonna invest the thirty million in OpenAI, but he said that's probably it, for both OpenAI and Anthropic. And what he hid behind was that they're likely to do IPOs. Yes. So that becomes another interesting wrinkle in this: they're both headed that way, I think. But I think OpenAI's has just gotten delayed. Yeah, I think you're right. And Anthropic's might have just gotten accelerated. Yes. Sentiment.
You know, the markets are based on sentiment, in addition to earnings, but really the sentiment, and the sentiment on both of these companies just shifted so dramatically in the last 72 hours, over the weekend, that it really changes the game. At least in the short term. But longer term, these are both going to be public companies. They're destined to be some of the biggest companies in the world; I think they are on the way. You know, Nvidia, OpenAI, and Anthropic, it feels like, three to five years from now, they are the sort of Apple, Microsoft, Google of this era. Now, we shouldn't count Google out either, right? It was just a few months ago that Google was sort of the belle of the ball with Gemini 3 and Nano Banana, and they do have some things going in the right direction. So we can't count them out either, I guess. Yeah. So, Paris, you raised in our private little chat, it's probably time, Jason, to earn some more money. It is. And then come back for some more stuff, 'cause you've got three ads to get in before we turn into a pumpkin. You know, everybody loves to advertise on our show, Intelligent Machines. They do. Four. We love them. Four today. Leo's gone, and look what happens, the money comes cashing in. Jason's here. Amazing. All right, so let's send it over to Leo to talk about another one of this week's sponsors. This episode of Intelligent Machines brought to you by Modulate. Every day, enterprises generate millions of minutes of voice traffic: customer calls, agent conversations, and, sad to say, fraud attempts. Unfortunately, in most cases, that audio is still treated like text, right? Flattened into transcripts, which strips it of tone, and more importantly, strips it of intent, strips it of risk. Modulate exists to change that.
First proven in gaming, Modulate's technology has supported major players like Call of Duty and Grand Theft Auto. These games really needed it to separate, you know, playful banter from intentional harm, and they do it at scale. Today, Modulate helps many enterprises, including Fortune 500 companies, understand twenty million minutes of voice every single day by interpreting what was said and what it actually means in the real world. This capability is powered by Modulate's very powerful ELM. They call it Velma 2.0. I love it. Velma is a voice-native, behavior-aware model built to understand real conversations, not just transcripts. It orchestrates a hundred-plus specialized models, each focused on a distinct aspect of voice analysis, so it can deliver accurate, explainable insights in real time. This is an amazing technology. Velma ranks number one across four key audio benchmarks, beating all the large foundation models in accuracy, cost, and speed, because it's designed to do exactly this. Velma's number one in conversation understanding, number one in transcription accuracy and cost, number one in deepfake detection, number one in emotion detection. Built on 21 billion minutes of audio, Velma is a hundred times faster, cheaper, and more accurate than LLMs at understanding speech. And that includes the best: Google Gemini, OpenAI, xAI. Nobody does it better than Velma. Most LLMs are just, you know, black boxes. Velma doesn't just assess a conversation as a whole, it breaks it down for greater accuracy and transparency by producing timestamped scores and events tied to moments in the conversation, which means you can see exactly what's going on: when risk rises, when behavior shifts, when intent changes. With Velma, you can zoom right in. You can improve your customer experience. You can reduce risks like fraud and harassment. You can detect rogue agents and more. See what this AI model can really do. Go to Modulate's live, ungated preview of Velma.
It's at preview.modulate.ai. That's preview.modulate.ai. See why Velma ranks number one on leading benchmarks for conversation understanding, deepfake detection, and emotion detection. Again, that's preview.modulate.ai. Now back to the show. All right. Thank you, Leo. And we have more war news. We do. So, we could talk about this whole Anthropic Pentagon OpenAI thing for another hour, I'm sure. The last thing I wanted, to wrap it up, is we ran a poll on The Deep View. Our audience is about half a million people; every day we run the top stories in AI, and we have a poll. And in our poll we asked: should Anthropic have acquiesced to the Pentagon's request to remove safety restrictions? All right. Before you knew the results, Paris, if you didn't look: what do you think the answer is gonna be? Yeah, what do you think the answer is gonna be? Overwhelmingly no. I'm gonna be optimistic. Okay. What do you think, Jeff? Yeah, I'm gonna be optimistic too. We're siding with Anthropic. Seventy-nine percent said no, they should not have acquiesced. Sixteen percent said yes, and five percent said, you know, other. Do you have any demographic or other details on the sixteen percent who said yes? Like, do they have a sub-bucket that's like corporate shill or something? I don't know, but you know, our audience is pretty diverse. It's mostly professionals who work in the AI industry or who work with AI. And it's pretty diverse across the US and Canada. And so Canadians skew it, then. All those good guys. Yes, yes. So I was surprised by that. What about it was surprising to you? What was surprising to me? I figured it would be, yes, a majority.
But I thought fifty-five, sixty percent, right, would side with Anthropic. With most anything these days, you will find a minimum of thirty-five to forty percent who will side with the administration. Exactly. Exactly. Well, as someone who's spent probably many hours at this point with our survey team at Consumer Reports, where I've had to write these sorts of things: I think that part of the reason why you get such an overwhelming response is the way the question was asked. It's true. It's definitely asked in a way that makes it clear what the moral choice is. It's true. It was a little bit of a leading question. So... But it's also kind of a leading scenario, you know? I would argue that that's an accurate way to describe the situation, even if it is leading. How would you otherwise word it? Did Anthropic do the right thing or the wrong thing, check one? Well, no. I mean, the thing I've learned from talking with, like, we have this whole team of professionals, I don't even know what the profession is called, survey people, is there are a million different caveats. It would be, like, a paragraph that dryly summarizes the debate, providing arguments on both sides, and then says, do you agree or disagree with Anthropic's stance as stated, or something like that. It would be making it a lot more boring, opaque, and kind of hard to parse, which I think is perhaps a disservice. It's like: Anthropic made its stance on the two items that it believes should not have been left to LLMs to do, whereas the government, you know, believes that they are elected officials and should be the ones that decide. Where do you stand? Boom, boom. Something like that would have been a little more... Yeah, but I think also, part of the thing is there are a million ways to slice and dice surveys.
I'm not sure that it's an entirely useful endeavor. I thought this was, because, also, full disclosure, I wrote the question. So I thought this was the most obvious question to ask, ultimately: should they have acquiesced or not? Well, I'd argue that all surveys are biased. Yes. That's fair. And I'd also argue that anyone who's reading your newsletter and responded to the survey already has a more robust understanding of the situation than the average survey respondent. That's right. That's right. So that's fair. Like, they understand what an LLM is, they understand the risks, they understand hallucinations, they understand how often they get things wrong, and they probably are less likely to trust them to do really, you know, important and sort of existential kinds of things. Is this one of the most overwhelming responses you've gotten? It is. It actually is one of the most overwhelming responses we've ever gotten to a question that went sort of one direction or the other, off the top of your head. Should Claude, I mean, should ChatGPT 4o be killed? I mean, should we be allowed to marry ChatGPT 4o? 85% say yes. Yes. Yes. Now, I do want to go to the Amazon question. There are some other war-related things that we should touch on. So, big old war section tonight, guys. Oh my gosh. Yeah, we do a whole one. So Jeff, why don't you talk a little bit about the Amazon story. This is straightforward. Amazon says the drone strikes damaged three facilities in the UAE and Bahrain. And no one's saying directly that they were targeted. However, things are pretty well targeted these days. And Amazon, as an American institution, bigger than Kentucky Fried Chicken in these foreign places, with the internet and technology, with everything that's going on, it's really interesting to me that American tech now becomes a pretty clear target. Yeah.
It's a really interesting development that some of the things most known about America and Americans are these companies, these global tech companies that are the biggest companies in the world and in some sense the biggest symbols of what is American, in the same way that Coca-Cola might have been, or Nike, or other companies in past generations, the most iconic things. Kentucky Fried Chicken, you mentioned, Jeff, you know, McDonald's. In a very real sense, the tech companies are the emblems of what America is. And so in that sense they are also, what we've learned now, the biggest targets if you want to make a statement about your feelings. Rest of World had good reporting on this as well this week. It kind of captured the larger stakes. Which is that, I mean, the Gulf has basically positioned itself as a safe harbor for the world's data, to attract Silicon Valley. There's been over two trillion dollars in investment pledges made during Trump's Gulf tour last May. And it's been kind of positioned as, quote unquote, the third global center for AI, alongside the US and China. And now, I mean, there was a researcher at Qatar University who told Rest of World the security frameworks behind the US-UAE AI partnerships were built for supply chain control and political alignment, not for protecting buildings during a military crisis. And now, I mean, this just makes it increasingly complicated. It does. And I want to go back to what we were talking about before, too, in terms of Google, Microsoft, Meta, and Amazon. Yeah. They were all scared of pissing off the administration. Now they're scared also of pissing off the populace, of pissing off nations, I mean Europe, that are not necessarily aligned with what's happening out of America and Israel right now. And they're scared of pissing off Middle East powers.
They're hot under the collar. This is not easy. There are hard decisions to be made, you know, in those cases. Yeah. I remember somebody telling me, that's when you actually have to be a leader. Like, most of the time when things are going well, your job is just to sort of keep the trains running on time, right? When things get hard and there are difficult decisions, that's when you need a leader, right? Those are the times when your leaders have to earn their money; they have to make very difficult decisions. And this is one of those moments where there are signals to sort out and try to understand. And I think the one that you mentioned, Jeff, that maybe was the X factor is the populace. I don't think we expected it, you know. Like, we saw it in terms of this poll, but also in terms of the way people voted with their downloads, in overwhelming fashion, with Claude over the weekend. People have made a large statement about where they stand on this in ways that have really been, in one sense, encouraging, right, from maybe a democratic-process standpoint, I don't know if we want to call it that, and in another sense, in terms of almost a level of engagement and activism. And I don't mean activism in maybe the traditional sense, but maybe just a level of not being, you know, just silent bystanders. I'm serious about Jimmy Kimmel. I think it's a Jimmy Kimmel moment for AI. Okay. Where Viacom, when it lived, thought, okay, no big deal, you know. I'm sorry, Disney. Disney. Wrong company. Wrong megacorp. Yeah, wrong late show. Disney thought, well, okay, this is the obvious thing we gotta do, so we'll do it. Okay, no big deal. We'll eat some crow with Kimmel and then figure it out. Nope.
They got into much hotter water then, and that gave them the cover and the courage to say no to the administration. So that's what's gonna be interesting in all this. Yeah. Yeah. I do think European regulators are going to sort of speak up too, saying, no, we don't want to use tools that are used to autonomously kill people. And why are you just saying, don't surveil Americans? Why don't you have the same standard for the rest of us? Sure. Sure. What gives here? They're gonna get caught in that vise. If they could encode some of that... I mean, what we've been seeing, because of the political division and the gridlock in the US, in passing laws, in sort of functioning, the legislative branch has really been in gridlock, not functioning well, for a couple decades. And because of that, the European regulators have really been setting the standards on a lot of these things. And when they do, these companies, which are global companies, often prefer not to have two different sets of rules they play by. And so the European standards will often be propagated. Although we have started to see that splinter in some ways the last few years. There are some features or products that aren't available in the same way in Europe as they are in the US. And so we'll see how sustainable that is long term. But this could get codified. Some of these things that Anthropic has brought up, to your point, Jeff, could get codified by the EU and/or other places, and that could have this sort of global impact on some of these companies. That's going to be really interesting to watch.
If you're Anthropic, or if you're a company that now doesn't know what to do, the question is who can give you cover? Oh gee, we'd love to do this, but we really can't. Look at all the implications. Yeah. I do think one aspect of this that has been interesting to me is, and I'm probably bungling the precise details of it, but I've always heard that there's some part of the terms for Anthropic employees' equity packages that says, by working at Anthropic you have to recognize that we may very well make choices that reduce your equity to absolutely nothing, make it absolutely worthless, based on the moral and ethical standards that we have built as central principles of the company. And this is a perfect example of that. It's an example of the competing values inherent in trying to combine morality or ethics with not only a capitalist ecosystem, but perhaps one of the most hyper-capitalist ecosystems we've ever seen, in the AI race. It's interesting. This came up over this weekend as well, Paris. I'm glad you surfaced this, or elevated this, because I hadn't seen that before, or heard it or read it. But basically some people took the language and put it on Twitter, and in the employment agreement it says we may make decisions, just as you said, that could take the value of the company to zero, but that we will make these decisions based on, essentially, our company mission. So now I'm gonna play devil's advocate for a minute. That's a proper Leo role, yes. Thank you. So, Nat Rubio-Licht, who works for The Deep View, wrote a perspective, or a commentary. We have this at the end of all of our stories; it's called Our Deeper View, where we're trying to really get to what's the thing, right? Not just report the news, but also get to what's really going on here.
And in one of these, over the past few days, what she wrote was that, in one sense, what else was Anthropic going to do? They built the company on this mission: we are the safe AI, we are the principled AI, and we are the ones that are gonna put guidelines in place. So did they really have any other choice? I would say absolutely they did. Every company in existence has basically made the other choice. Google used to have a core principle of don't be evil, and they went so far the other way they were like, we're scratching that out, buddy. So this is why we published that argument just as it was, and I am really proud of the way that she wrote it. I thought it was very well reasoned and very clearly stated. When we talked about it, one of the things that I said was similar to what you mentioned, Paris. What we've seen throughout history is, when these moments come, what the technology companies have all said is: we make the tools; how other people use them is up to them. They sort of wash their hands of it. This goes all the way back to IBM in World War II selling technology to Germany. So I know we keep going back to Germany in the 1930s. But here we are. All roads always lead back there. Frowny face. For the disaster that it became. But that was what IBM said in the 30s. This is what Microsoft said in the 80s and 90s. This is what Google said, as you said, Paris, in more recent history. The technology companies always default to being sort of morally neutral: we make the tools; what people do with them we can't really control. And so one of my counterpoints was that Anthropic is actually saying, no, we will not do that. We have certain red lines that we can't cross, because we are not confident that the technology can do this and do it well, and the consequences of it not working correctly are disastrous.
And so, these things are core to democracy, core to human rights; like, we can't do it and we won't do it. I find that pretty unique in human history in terms of all these tech companies. I can't think of another example. Jeff's example of the drug companies is a good one, the pharmaceutical companies. That's pretty edgy. But that's pretty much an edge case. Had you asked me a week or two ago what I thought Anthropic would do, I would have said, oh yeah, they're gonna cave, keep their military contract, make sure they're not cut off from the supply chain like every other company. I will say it is very startling to me. It's surprising that they decided to literally put their money where their mouth is. So let me go devil's advocate again. Okay. Yes. I contend, and I've done it on the show often, that the idea of guardrails is a lie. It's a general-purpose machine. My example always is, hello, Gutenberg. Thank you, Benito, for the plug there, for the full screen for those watching; you can go back to the three-shot now. The printing press was a general machine. You couldn't have said to Gutenberg, okay, you can do this, but there's this guy who's gonna be born called Martin Luther, keep him away from the damn thing. You can't, 'cause it's a general machine. And AI, though I don't believe in AGI and all that, is a general machine to the extent that anybody can make it do anything they want. And the guardrails are a lie. So in a sense, on the one hand, Anthropic could have said, yeah, we have no control over how people use our tools. Exactly the way you put it, Jason. Exactly that. We have no control. It can be used any way. But then on the other hand, they say that it's not up to the tool to be controlled, it's up to the people. And so it's up to us to tell the customers: you may not use it for this. And there are plenty of other examples of that, right?
I mean, when I recorded the audiobook for Magazine, on sale now, right, at the end they were gonna have me read a statement saying no company may use this in any form, in any universe, now or in the future, for AI, or we're gonna kill you. And I said to the producers, I can't say that. Not coming out of my mouth. Nah. And so they're setting a restriction. What are terms of service? They're all restrictions that are put on products. Whether we read them or not, whether we follow them or not, is another matter. But companies all the time say you may not use this in this way. Okay. You may not do these things. So it's not about the tool itself being foolproof; it's about the need to tell the people who use it how they may use it, in your view. And if you don't like it, don't buy it, should be what's said. I think it's also interesting here, and I know we're going back to our first story again because it's so big. It's so big. One thing I don't understand about the timeline of all this is that this was in their contract and in their rules right from the beginning: you may not use it for these two things. Was it just the war game that motivated them? Was it just wanting to be hardasses with Anthropic? Yeah, what triggered it? How did it get this bad? Yeah. We are missing some information on how that unfolded, and why, and when. I think it's likely to come out in the coming days, weeks, months. But yes, we don't know that, and it will potentially be helpful to us in understanding the story. For now, I'm going to send it back to our good friend Leo to give us another one of this week's wonderful sponsors. This episode of Intelligent Machines brought to you by Zscaler, the world's largest cloud security platform. Look, we here at IM know the potential rewards of AI. And you probably should know about them too.
It's just too great for your company to ignore. But we're also aware, and I hope you are too, of the risks. Not just, I mean, loss of sensitive data, attacks against enterprise-managed AI, but also, frankly, threat actors. Generative AI increases their opportunity, helping them to rapidly create phishing lures, to write malicious code, to automate data extraction. There were one point three million instances of social security numbers leaked to AI applications last year. ChatGPT and Microsoft Copilot alone saw nearly three point two million data violations. You don't want that to happen to your company. You gotta rethink your organization's safe use of public and private AI. Just check out what Siva, a director of security and infrastructure, says about using Zscaler to prevent AI attacks. With Zscaler being inline in a security protection strategy, it helps us monitor all the traffic. So even if a bad actor were to use AI, because we have a tight security framework around our endpoints, it helps us proactively prevent that activity from happening. AI is tremendous in terms of its opportunities, but it also brings in challenges. We're confident that Zscaler is going to help us ensure that we're not slowed down by security challenges, but continue to take advantage of all the advancements. Thanks, Siva. With Zscaler Zero Trust Plus AI, you can safely adopt generative AI and private AI to boost productivity across the business. Their zero trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com slash security. That's zscaler.com slash security. Now back to the show. So I want to talk about one more big thing that happened. There are so many big things. Like, we could go and talk about a lot of other things. But there is one thing. I know, we could do a lot. I've never seen a week like this.
And I feel like I've said that about three or four times so far in 2026. But here we are. I want to talk about a product that Perplexity has announced over the past week. And Perplexity had been flying a little under the radar so far in 2026. But they released a product in, you know, what has become the year of AI agents. AI agents have become the thing. Claude Code, so Anthropic's products Claude Code and Claude Co-Work, were a big part of this. That has become an AI agent. Leo has talked a good deal about that and the success he's had in automating some things in ways that he has found incredibly helpful and powerful. And then of course we've had the whole OpenClaw, Clawdbot, Moltbot phenomenon all of its own, and then OpenAI of course hired Peter Steinberger, and OpenClaw has become its own foundation. That has again risen to a new level of consciousness, this concept of AI agents. And when OpenAI hired Peter Steinberger, they basically said, we're gonna have Peter come here and create AI agents for everyone. We're gonna make AI agents that are just so much easier to use, because you have to be a bit of a techie to use either OpenClaw or Claude Code and Claude Co-Work. And so they're like, we're going to make this a lot easier. And then one week later, Perplexity released exactly the product that they talked about. And obviously they didn't do it in a week; they've been working on it for a couple months. But clearly the level of acceleration in the space, and the level of being able to use these coding tools to elevate what engineers are capable of, and the speed at which you can ship new products, have taken the velocity of this industry to a level that we've just never seen before. And that's Perplexity Computer. So, full disclosure, I had a bit of an exclusive on that. I published the story.
It was on the top of Techmeme when they released it at the end of last week, and I got a chance to use it a little bit right away. So I can speak to it. But there was something else interesting. There are like three things that this does that other AI agents don't, and we can talk about them. But the thing that happened with this that was really wild was on Twitter. Essentially the Perplexity team, and I have this on good authority from the Perplexity folks, they said to their team, like they do every time they launch a new product, hey, this is our new product; if you want to tweet about it, you're welcome to. And usually you get like a handful of people that do it. Well, unforeseen by Perplexity, their team, which had been trying this thing and loving it, went on Twitter and just exploded Twitter with it. And so this thing spread far and wide really quickly and gained a little bit of a viral moment. It got helped by one other thing, which is that some people combined Perplexity Computer, which is what they call their AI agent, with Perplexity Finance, their sort of Yahoo Finance competitor, and they basically were like, I used this to make a Bloomberg Terminal competitor. I one-shotted it. I used this AI agent to build my own Perplexity terminal, and I just canceled my thirty-thousand-dollar subscription. And so that gave it a whole other level of interest and buzz. But this Perplexity Computer thing is really interesting. You know, Jeff, when you were talking about it before, you were like, so this is just like an OpenClaw that everybody can use.
And I thought that was perfect; my very first headline had a little bit of that in it too. And I think it's a great way to think about it. So let me ask you two questions about that. One, I think I asked whether it was OpenClaw but ready for prime time. Is it in some way better, safer, not just slicker, than OpenClaw? Is that possible? And then second, I mean, I think almost anything is safer than OpenClaw. Is it safe enough? Yeah, that's true. That's true. Yeah. Though, you know, I just saw a story, I don't think I put it in the rundown, that, Cursor, is it? What do they call their browser? Comet. Comet, thank you for putting it up. Until about a month ago, one calendar invite in Comet could corrupt everything you have. Which they fixed, but that was an issue. But my other question is, 'cause the one thing Perplexity has always done well is stay on top of PR. They're really good at, like Comet as an example, where they knew everybody was going to come out with these things, and they came out with theirs first. They do these sometimes outrageous things. Do you think that they wished they had released Computer before OpenClaw, or did OpenClaw open the door for them, saying, we got something better? Yeah, I think you're right, Jeff. They like to move fast, and they've pioneered a bunch of the things that OpenAI and Anthropic eventually ended up doing, right? As you're getting at. And so I think they do like being first. In this case, I do think that OpenClaw gave them a chance to ride that buzz a little bit, in a way that otherwise AI agents would have felt a lot nerdier. I mean, they still feel nerdy, but at least OpenClaw created a lot more curiosity. But it's very hard to use.
You have to be very technical. It's very command-line oriented. There are some hosted versions of it that you could get that are a little easier. But did you get to play with Computer? I did. So you can only use it if you have a Perplexity Max subscription, which is their two-hundred, two-hundred-fifty-dollar subscription. So I had a version of that that I could test it with. And there are just a few things that it does really well. But Paris, I can see you're dying to jump in. My issue is that it's a terrible name, because Jeff just said, did you get to play with Computer, and that made me involuntarily laugh. Like, they might be good at PR, but naming your product Computer? It's not gonna work. How am I gonna tell my mom to download Computer? Download Computer. Mom, you gotta get on Computer. I know. I had the same first reaction, Paris. The funny thing is how quickly I'm like, okay, Perplexity Computer is like PC, right? So it basically works. It's so generic that it's bad, but also in the sense that you don't have to remember a whole lot about it, at the same time. But the product itself is interesting. It does three things really well, I think. With these AI agents, and this is where it gets super nerdy, you have to have an API key, which is essentially where you pay per use, because these things use a lot of what are called tokens. That's AI inference. And I know this is a lot, but basically, every time you use one of these AI models, it's expensive. And right now, if you pay for your $20-a-month plan, you are typically somebody who probably doesn't use it a lot, or most people don't. But these AI agents use a lot of computing power, if we just put it that way.
And so if you're using OpenClaw, or even Claude Code, you have to have an API key, because basically you're paying per use. If you use a bunch more, you're gonna pay a bunch more. The first thing that Perplexity did to make AI agents easier is they do away with all of that. And that's why they only have it on the expensive plan for now. Because essentially they're giving you a bunch of extra usage when you're on that plan. Now, if you go over this massive cap, so if you're a really hardcore coder or something, fine, you'll probably still have to pay. But if you stay under that, you're just gonna use this like everybody would. That was the first thing. The other thing that it does that's really unique and interesting: Claude Code only uses Anthropic's models, right? Yeah. I mean, I think that's actually a really notable and interesting thing here, is that you could have different models: 4.6 for core reasoning, Gemini for deep research, Grok for yelling at someone on the internet, ChatGPT for sycophancy. You could have it all. It routes your queries to the best models. Just like you're saying, it knows which ones are good at which things, and that part is pretty good. That's the second thing. The third thing it does: say you want it to build an app, say you want it to build the Paris app for scanning Twitter and giving you story ideas that relate to topics X, Y, and Z. And then you also want to share it with, like, one person on your team. Perplexity Computer can do it. It can deploy it to the web, and then you can share that URL. Whereas if you had Claude Code, if you had Codex, that's OpenAI's program, if you had Replit, you know, Cursor, if you had one of those, you have to make the code, then you have to go deploy the code to a server somewhere.
And they learned this from Lovable, for those who are familiar with that. That's an AI agent coder where you just make the thing and it deploys right away on Lovable. You can make your thing in ten minutes and then send a URL; you just made a web app. And you've got Gemini... I'm not positive about that, but I think you're right, I think it is working with Gemini. It's a company based out of Northern Europe, but we like that. Yeah. They are a company that figured out that one piece: oh, if we let people make the thing and deploy it right away, that's a big plus. Well, Perplexity Computer learned that and it does that as well. So for example, I had it make an app. I have this app test where I want it to go every morning and scan all of the sources. I tell it, here's a bunch of sources, scan these and a bunch more like them, and it'll pull like 20 sources for me. I had ChatGPT and Claude write this query for me, to make essentially a morning news scanning app. I took that. Most of these programs break on it. When I do it, it either doesn't do it right or it messes something up. Lovable did the best job of making it and then putting something on the internet that I could use right away. The only other one that could do that was Perplexity Computer. The very first thing I gave it was this somewhat complex "make me a morning AI news gatherer." It did it right away, it deployed it, and I could send the link out. And I was like, whoa, okay, so this is more powerful than Lovable. It deployed it as well. You didn't have to go into a terminal. You didn't have to put it on a server.
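The "make the thing and get a shareable link immediately" workflow can be sketched like this. Everything here is a stand-in: the "generated" app is a stub and it serves on localhost, where a real service like Lovable or Perplexity Computer would publish to a public URL for you.

```python
# Sketch of build-then-deploy-then-share: generate a one-page app,
# serve it, hand back a URL. All names and behavior are illustrative.
import threading
import urllib.request
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path

def generate_app(prompt: str, out_dir: Path) -> Path:
    """Stand-in for the AI step: 'generate' a stub web app from a prompt."""
    out_dir.mkdir(exist_ok=True)
    page = out_dir / "index.html"
    page.write_text(f"<h1>Morning news scanner</h1><p>Built from: {prompt}</p>")
    return page

def serve(directory: Path) -> str:
    """Stand-in for the deploy step: serve the app, return a shareable URL."""
    handler = lambda *a, **kw: SimpleHTTPRequestHandler(
        *a, directory=str(directory), **kw)
    server = ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    host, port = server.server_address
    return f"http://{host}:{port}/"

if __name__ == "__main__":
    app_dir = Path("generated_app")
    generate_app("scan 20 AI news sources every morning", app_dir)
    print("Share this link:", serve(app_dir))
```

The terminal-free experience the hosts describe is exactly this loop hidden behind one prompt box: the user only ever sees the final link.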
You didn't have to do any of that. I've been arguing about this with Leo, because Leo is nerdy and he loves nerding out and he wants everybody to be using the terminal. And I'm saying you're not going to scale at that level. You're not going to scale if people have to install things to run them. You want to say, look what I made, world, with a link. I'm nerdy and I get nervous when I'm in the terminal. Yeah. I mean, I still do it, but even the barrier of going from Claude Cowork to Claude Code is a lot for the average person. For sure. So Perplexity Computer: the name is a little questionable, but the product is really promising for the reasons you just mentioned, Jeff. The ability for an average person to go in, describe what they want, and have it spit out a thing where you can just take a link and send it to anybody, that is really powerful. And then, as you mentioned, Paris, the fact that it can essentially use best-of-breed models across all of these labs is also a bit of a superpower. So really interesting. I'm gonna send it back to Leo now to do our last sponsor for the show, and then we'll come back and talk a little bit more about some tools. This episode of Intelligent Machines is brought to you by OutSystems, the number one AI development platform. The agentic shift is happening. You know that if you listen to the show: we're really moving beyond simple chatbots. And here's the good news: OutSystems is leading the agentic conversation. OutSystems helps businesses build AI agents that can actually do work. It's amazing. Things like taking actions, making decisions, integrating with data, rather than just answering questions. OutSystems is solving the talent gap. There really aren't enough AI engineers in the world, but OutSystems empowers the developers your company already has to build at an elite level. It's like a superpower for devs.
OutSystems is the secret weapon behind the world's most successful companies. And not just for little apps; these are massive, complex systems. Systems that run banks, insurance companies, government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare. And I can give you an example; I can give you several. They helped a top U.S. bank deploy an app that lets their customers open new accounts on any device, delivering 75% faster onboarding times. They even helped a global insurer accelerate the development of a portal and app for their insurance agents, delivering a 360-degree view of customers and enabling those agents to grow policy sales. That's just a small sample of what OutSystems can do. OutSystems combines the speed of AI with the guardrails of low code. It's a marriage made in heaven, the safest and fastest way for an enterprise to go from "we need an AI strategy" to "we have a functioning AI application." Stop wondering how AI will change your business and start building the agents that will lead it. Visit outsystems.com/twit to see how the world's most innovative enterprises are using AI-powered low code to transform. That's O-U-T-S-Y-S-T-E-M-S dot com slash T-W-I-T to book a demo and see the future of software development. outsystems.com/twit. And we thank them so much for supporting Intelligent Machines. And now back to our intelligent hosts. Thank you, Leo. All right, so now it's time for the picks of the week. Paris, would you like to get us started? I will. And this was an important pick for me to do in the week that Leo's not here. Listeners of the show will know that last week one of my picks was the New York Times crossplay app, which I'm somehow more addicted to than I was last week.
I have, I think, more than twelve games going on. I was really worried at first because I got such a bad rack of tiles when we started playing, but I beat him in the end and we are rematching. One of the reasons I think I'm beating him is that over the last couple of years I've gotten really into Scrabble strategy theorizing, and there's a great book and online resource called breakingthegame.net that has all the beginner, intermediate, and advanced Scrabble and Scrabble tournament strategy. And I'm choosing to share this on the show because I shared it last week with some of my other friends that I'm absolutely dominating in Scrabble, and it has not improved their ability to play, so I feel safe sharing it in a place where Leo could hear it. But I don't know, check it out if you want to beat your friends more at Scrabble. It gives you some good stratagems to be thinking of. The fact that Leo... from our WhatsApp. Yeah, we're a WhatsApp family now. I mean, our board right now is carnage. We've really played ourselves into several corners, none of which are particularly good. But I was very worried that I wouldn't beat him in the last game, because as of the last turn he was up by like thirty points. But I think I ended up beating him by two in the end. So I don't know, get on the New York Times crossword app and use breakingthegame.net to trounce your friends even harder. That's my pick of the week. Stone-cold killer gives the pick. That's me. Very good. All right, Jeff, how about you? All right, this is my one. I have more than one.
I want to mention this story just for the record: News Corp did a big deal, $150 million, with Meta. And Robert Thomson, who I disagree with constantly about all matters of internet, said that News Corp is now basically an AI input company, which I found amusing. But that's not my pick. I could do a few different ones here. I could do a paper that's out that says we don't know how social media bans will affect youth, but we're doing it anyway. But I'll leave that aside. I could do a nice New York Times feature about Bell Labs and all that has happened there. This is why I wrote an op-ed, a couple of years ago now, begging for the soon-to-be-vacated Bell Labs in Murray Hill, New Jersey to be turned into a museum. Yeah. But instead I'm gonna do Walkman Land. So Paris, are you too young for Walkmen? No, I had one once. And they also came back, I feel like, in the last five years a bit. So if you go to Walkman Land... Well, I guess we can't show it. Can we show it? Or no, we can't. I'm working on it. He's working on it. Sorry, I should have warned you, Benito. Okay. So I knew Paris would do exactly that, and... ooh. So this is pages of Walkmen. I always did the plural Walkmen, not Walkmans. Walkmen is right. Walkmen. Good. That's correct. I had this one. I had this one. I had this one. You did. The Aiwa HS-PS008 is so pretty. Honestly, a lot of these are very pretty. The Philips AQ 6492 is gorgeous. I'm up to 17 pages. I'm trying to see how many pages there are. 20 pages. It goes on and on and on. All the Walkman models. 52 pages. Yep. That is crazy. 52. Jeez. And it was life-changing. Well, some of my friends have the Sony TCM-4500, the My First Sony range. That's a real popular one among the Brooklyn crowd nowadays. Oh, you mean today they have one? Like right now? Today.
I know at least two people who have that (I'll put it in the chat) in their home right now. And I always see it at people's apartments, take a photo of it, and then never look it up. Which now I know. This was a huge change, right? People could take their music anywhere. The bigger change, of course, came before that. I went to Greenbrook Electronics today, which I can't wait to take Paris and Leo to, because it's this weird, kind of dusty museum of a store. And I had to buy a transistor for a class I'm teaching Friday. I'll put this in front of the camera. Oh, little guy. The transistor, of course, replaced this: the tube. As Benito pointed out earlier, he still uses tubes because he's an audio freak. So anyway, this is what changed everything, because this is what enabled the portable radio. This is what made it so you could take music with you anywhere. Transistor-operated. Right. Amazing. But then you were stuck with radio. You were stuck with DJs. You were stuck with all that. The Walkman gave you the first control. And so I think it's important. So that's that. I'll mention one other thing. Where is it here? That according to Edison Research, podcasts now lead AM/FM in spoken-word listening. Really? First time. It's crossed the Rubicon. I'm kind of surprised that didn't happen before, but a lot of our grandparents still listen to AM radio. Yeah. And in factories and stuff like that, they still leave it on all day, right? And even restaurants, the back kitchen and the back offices and things like that. A lot of the Brooklyn girlies also have AM radios, I will say. The coolest one is an under-cabinet AM radio setup. It also has a cassette player in it. And one of my friends who has one of those is moving to Chicago, and I'm hoping that I get it. Oh. Okay.
You know, there's also this kind of comeback of the iPod. The iPod is so hot right now. It's actually expensive to get an iPod. So that's not just the New York Times making up a trend. No. As a matter of fact, I saw Tony Fadell, who was one of the creators, or was on the team, tweeting out, like, look, I don't know if Apple's gonna start making it again, but it's official, this is going pretty big. And I think a lot of it is that it's the anti-screen-time device, you know? No notifications, no ads. I roll my eyes a little bit. Could I imagine going there? Probably not. But anyway, it's also the flip phone. I know some people that have done the flip phone thing as well. One of my friends has a flip phone and I ridicule him every single time. As you should. It would be like going back to tubes when you have the transistor, you know. Yes. But he also tries to text from it, and we're in group chats, and I'm like, Rick, you can't be doing this. It's rude. The iPod thing is so weird, because for the longest time we were like, just put this thing in my phone, please. Put this in my phone already. And now we're separating it again. That's funny. Now we're like, I want the iPod back. All right. Well, for my pick, mine is going to be something that I feel should be so obvious on its face, and really should be a feature and not a company.
And yet I almost can't live without it on a daily basis, which is this app Whisperflow. I use it on Mac and I use it on my phone as well. You just hold down one button and dictate to it, and it essentially puts what you say into clear text; it'll correct it and turn it into complete sentences. And I feel like this should not exist, right? Like Alexa, Siri, Google Home, Google Assistant should have done this really well a long time ago. But it's funny that for all of the challenges Apple has in AI, if it would just buy Whisperflow and make it so that when you talk to Siri it actually works every time, because this thing works essentially every time, or 90% of the time, the perception of Siri would go up so dramatically it would be incredible. And so that's what makes me think this is really something. There are a couple of other ones like this, a couple of competitors as well. Whisperflow is probably the best known, and I find it to be one of the easiest to use, especially on the computer, because all you have to do is hit the function key on a Mac, it pops up, and you can use it for anything. There are two things that I love about it. One is that it tracks how fast you go. Most people can type; if you're really fast, you type like 75 words a minute, right? The average person is like 45 to 50 words a minute, and I think I'm about in that range. When you speak, you're up to about 150 words per minute, 125 to 150. Now, I've noticed there are times when I can't do it, like when I'm in a cafe and it would be a little weird. But the "whisper" part is right, because I can just whisper and it actually works.
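The speed argument is simple arithmetic. Using the rates mentioned in the conversation (roughly 45 to 75 words per minute typing versus 125 to 150 speaking), here's a quick hypothetical sketch:

```python
# Typing vs. dictation time for a draft, using the rates quoted on the show.
def minutes_to_write(words: int, wpm: float) -> float:
    """How long it takes to produce `words` at `wpm` words per minute."""
    return words / wpm

draft = 1500                              # a longish article
typing = minutes_to_write(draft, 50)      # average typist, ~50 wpm
dictating = minutes_to_write(draft, 140)  # comfortable speech, ~140 wpm
print(f"typing: {typing:.0f} min, dictating: {dictating:.1f} min")
```

At those rates dictation is nearly three times faster, which is the whole pitch for tools like this.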
Whisper into Whisperflow and it works. Which is pretty cool. And then the other aspect of it that I really enjoy is the fact that it gamifies it a little bit, right? You can get the stats on how fast you're going. And I can be in bed when everybody's asleep and whisper into it for my notes. And then the last thing I love about it: I do my best thinking when I walk. I get my best ideas to write when I walk, but before, I could only type, so sometimes I would type on my phone and leave notes in Apple Notes and stuff. But with this, I have actually started writing some things while I'm walking, and it's been super, super handy. So that's mine. It's like the AI feature that really shouldn't be an AI feature. But AI is so good at it. I mean, I've talked about this on the show: I have a thing on my computer, MacWhisper, that runs a bunch of Whisper transcriptions, and that's how I transcribe a lot of interviews. It's phenomenal. It's local. I love it. So good. So if you're still typing most of your stuff out there, folks, you could use this tool, or use another one. You gotta use your fingers. I gotta use my fingers. I've got to. That's part of my book, Hot Type, a typographical autobiography where I talk about how the keyboard changed the way I thought, and then the computer changed the way I thought. But unless my fingers are poised over the home keys, I can't think. No. Interesting. So it's an interesting world. It is an interesting world. Those are all really, really fun picks. Well, Paris and Jeff, thank you for letting me be here to do this with you.
What a pleasure. Really, really good job. Thanks so much for steering the ship. This was phenomenal. And we're finishing before a large cane emerges from offstage to drag you out of the podcast studio booth. That's right, there's the cane. All right. Well, appreciate it. Thank you everybody for tuning in to Intelligent Machines. Leo Laporte will be returning, you can count on that. Thank you for a great week. This show is back every week; Paris and Jeff will be here again and Leo Laporte will be back. And you can count on even more news. However big the news was this week, it's going to be even bigger. It never stops. And where can people go to follow your work, Jason? Yeah, thank you, Paris. So, thedeepview.com, you can find me there. Subscribe. thedeepview.com is how you can get our newsletter; every day we have a send of the newsletter with the top stories in AI. We pick three stories and we try to unpack them. And then you can also find me, if you want my updates in real time, on Twitter, God help us all, at x.com slash jasonheiner. And yeah, thank you again and have a great rest of the week. Hey everybody, Leo Laporte here, and I'm gonna bug you one more time to join Club TWiT. If you're not already a member, I want to encourage you to support what we do here at TWiT. You know, twenty-five percent of our operating costs come from membership in the club. That's a huge portion and it's growing all the time. That means we can do more, we can have more fun. You get a lot of benefits: ad-free versions of all the shows, access to the Club TWiT Discord, and special programming, like the keynotes from Apple and Google and Microsoft and others that we don't stream otherwise in public. Please join the club. If you haven't done it yet, we'd love to have you.
Find out more at twit.tv slash... And thank you so much. "I'm not a human being, nodding to this animal scene, I'm an intelligent machine..."