Intelligent Machines (Audio)
TWiT
Philosophy of public discourse and AI
From IM 862: Ménage à Claude - AI, Human Agency, and Economic Value — Mar 19, 2026
It's time for Intelligent Machines. Paris has the week off, but Father Robert Ballecer joins Jeff Jarvis. And just in time for a great guest, Rumman Chowdhury is here. She's the founder of Humane Intelligence. She says we need to take back agency when it comes to AI. We'll talk to Rumman Chowdhury next on Intelligent Machines. This episode is brought to you by OutSystems, a leading AI development platform for the enterprise. Organizations all over the world are creating custom apps and AI agents on the OutSystems platform, and with good reason: build, run, and govern apps and agents on one unified platform. Innovate at the speed of AI without compromising quality or control. OutSystems is trusted by thousands of enterprises worldwide for mission-critical apps. Teams of any size and technical depth can use OutSystems to build, deploy, and manage AI apps and agents quickly and effectively without compromising reliability and security. With OutSystems, you can accelerate ideas from concept to completion. It's the leading AI development platform that is unified, agile, and enterprise-proven, allowing you to build your agentic future with AI solutions deeply integrated into your architecture. OutSystems: build your agentic future. Learn more at outsystems.com/twit. That's outsystems.com/twit. Podcasts you love, from people you trust. This is TWiT. This is Intelligent Machines with Jeff Jarvis and Paris Martineau, episode 862, recorded Wednesday, March 18th, 2026: Ménage à Claude. It's time for Intelligent Machines, the show where we cover the latest AI news, robotics, and all those smart machines all around us that are getting smarter and smarter. Paris has the week off, but I'm very happy to say we've got Father Robert Ballecer. I was going to call you Roberto Ballecer. Robot Roberto. Father Roberto is here, visiting us from the Vatican. It's not a joke, folks. That's him. Hi, Robert. Great day.
It's always wonderful to see you. It's always a great day when I get to see you and the TWiT army. I miss y'all. Yeah. Robert used to have a little place in the basement of the old TWiT studios. That's actually not a joke, it's true. Also here, of course, the professor emeritus of journalistic innovation at the Craig Newmark Graduate School of Journalism at the City University of New York, Jeff Jarvis, author of The Gutenberg Parenthesis and Magazine. His new one, Hot Type, is now delayed, but you can still pre-order it. You can. You can. Yes. Gives you no advantage. It's now July, you said? It's August. No, that's death for books. No, so they're moving it to the end of August, so it's basically a fall book now. A fall book. Yes. Now, you brought us, I think, one of our most interesting guests yet. So would you introduce Rumman Chowdhury? Well, I'm going to have the egotistical joy first of announcing something else that will lead to Rumman. So, big announcement. I meant to have Benito get some trumpets or drums or something. I am proud and amazed to announce that Bloomsbury Academic is launching a new book series called Intelligence: AI and Humanity, which is not a technical book series, but a book series enabling writers from many disciplines to reflect on AI and on how AI reflects on humanity. Say the title again. Wow. And I will be editing the book series. Oh man. I can't believe it, but I will be editing the book series. So I'm very proud to say that we have signed up our first three authors. I'll mention the other two first. One is Matthew Kirschenbaum, who's been on this show, who's writing a book about the textpocalypse. Another is Charlton McIlwain, who is at NYU, who's writing a very hopeful book, surprisingly, about race and AI and the opportunity to undo the oppression of technology on race.
And then we have with us, I'm very happy, very proud to say, the author I was just dying to get to be the first author in this series, Dr. Rumman Chowdhury, who's writing a book asking the question: what is intelligence? Oh, that's a great question. Isn't it perfect? That is the fundamental question, if you ask me. So Rumman has a PhD in political science. She is the founder of Humane Intelligence, which she'll explain to us, but it is an effort to hold AI companies accountable. I know her name from Twitter, where you were responsible for ethics. I was. I was the engineering director of machine learning ethics, transparency, and accountability. This is before Elon. Oh yes. I always say I worked at Twitter and not X. Shocker, I know; you'll be shocked to hear that my perspectives and his differ. In 2023 she co-organized the largest generative AI red-teaming event in history, putting eight major AI models in the hands of four thousand people to probe for vulnerabilities. I was one of them. Yeah, Robert was there. Rumman, we're so thrilled to have you on Intelligent Machines. What is intelligence? Ooh, okay. So I've already written chapter one of the book. So let me preface this with my twisted point of view, and you can say I'm crazy. We've been trying for decades to duplicate how humans think with computing machines. And a lot of people say, well, you can never do it with a von Neumann architecture; that's just not how humans work, humans are massively parallel, blah, blah, blah. But what's very interesting to me is that once we started using transformers and started building these large language models with transformers, they seem to have become more and more, dare I say, intelligent. They seem more like humans.
Not, you know, a poor imitation; but nevertheless, it has made me think a lot lately about, well, what are we then? I mean, literally all of us are just the sum of our, dare I say, training over our lifetimes. Perhaps we're born like an LLM, maybe with some instinct that forms us to begin with, but then as we grow up, we learn language, and we learn all of this through example, much like a machine does. So I'm really thinking that one of the most interesting parts of AI is what it teaches us about our own consciousness. Well, absolutely. I want to tease apart many, many points you make that I've actually already started exploring in the book. And I wasn't kidding when I said I've written chapter one; this is an aggressive writing timeline, because I think they want to launch the first book Q1 or Q2 of next year, which means I have to be done writing it by August. And we want your book to be the first book in the series, right. So back to the fundamental question. There's the what-is-intelligence question, and then there's the how-do-we-measure-intelligence question, and then there's the intelligence-versus-sentience question, right? Cognition does not necessarily mean sentience or consciousness, and you've used the word consciousness. So one point is that every measurement of intelligence we have today is fundamentally rooted in economic value. The first part of the book really goes through how intelligence is a social, economic, and political construct. So why do we care? The basic question I ask is: what is it that is really striking us all existentially? And it's not just that these machines are performing the way we perform; it is that our sense of self-worth and value is driven by this notion of intelligence. But if you go back to how intelligence has been measured, it was constructed in the first industrial revolution.
It was constructed around... so this is Alfred Binet, who was asked by the French government to find a way to classify kids in classrooms, to determine who would be a good factory worker, who might be a good manager, who would be organized, who wouldn't. So it was always rooted in productivity. So today, when Sam Altman says artificial general intelligence is the automation of all tasks of economic value, we're like, what? And it hits us hard in our core. It's because the fundamental basis of what we call intelligence has always been about workforce productivity. But is that what intelligence really is? And then we get into the social and political ramifications, right? Politically and socially, why do we care if we are intelligent or not? Well, one aspect of it is that rights are given and denied based on it. The justification of why it was okay, quote unquote, to enslave Black people was in large part rooted in concepts, or intentional misconceptions, about intelligence. I.e., you can treat these people like animals because they are no smarter than animals. Women: why were women not allowed in higher education? Oh, because your little brains could not handle it; your intelligence is not there. So we make these presumptions, we design these tests to prove the points we want to make. So to your point that AI is a mirror, I would even say our construct of intelligence is more about the fears of the economic ruling class and their attempts to categorize us and put us, quote unquote, in our place than it is an objective measure of anything. So the problem is when this goes into computer science and we have the Dartmouth Conference: these men, all computer scientists and mathematicians, sit down with actually a very simplistic understanding of intelligence. They presume intelligence has been mapped, that we know how to measure intelligence in people. That's their starting presumption.
The second presumption they make, which is incorrect, is that, okay, we can break this thing called intelligence down into its aggregate parts, and you can just sum them back up and it'll be intelligence, and break it back down again. But if you know basic systems theory, there is no system in which you can just sum up the parts and get the system; the system itself has some residual impact. So there are a lot of things. One last thing. The other thing that interested me is in science: how have we explored measuring intelligence in non-humans? Because one assumption about computer intelligence is that, for some reason, because we are a very species-centric animal, we have just presumed that human intelligence is the thing to model, right? But what if we look at other ways of looking at intelligence: animal intelligence, mycelial intelligence? There's a whole field called extraterrestrial intelligence. If we go to Mars and there's a moving slime, how do we know that slime is intelligent? And why does this matter? Well, because it can lead to ecological ramifications; it could lead to so many other things. So there are fields of study, and, newsflash, in zero percent of these fields do they base the measurement of intelligence on human capabilities. In fact, that is almost the first thing you are told not to do. Because animals and mushrooms, etc., have different ways of perceiving the world that are actually better than ours in some ways, worse in others. But what you don't do is give a monkey a set of physics questions and say, well, obviously we're smarter than you because you don't know what physics is. So again, you flip the script and say, well, then why have we decided that these machines need to be modeled after people?
It seems like a pretty self-fulfilling prophecy then, because these CEOs sat down and said, what we need to do is model the human brain and automate all the economically valuable things this human brain can do. So what we feel is really not an attack on our intelligence; it's more visceral. They just want to get rid of us as intermediary economic bodies. I saw this TikTok where a woman said something like: companies seem irritated that they need to go through us to get to our wallets. And that is how AI feels; let's just get to the money directly. It would be so much easier. Right. But I also think that there is an existential dread that comes from the thought that maybe we're not special. That maybe what we have is just a kind of intelligence. When you say slime mold might be intelligent, that's threatening too, right? We want to think that we are somehow special. Well, absolutely. And to your point, it goes back to how we construct intelligence. If it is constructed on economic productivity and then we make an economic productivity machine, then we're like, wow, we're not that special. So the last part is really... I've been playing with the idea of calling the book something like The New Intelligence. It's like, wait, let's go back. Given that we have created a machine that can surpass us in the way we have defined intelligence, our current measurement of intelligence, let's actually create a method of understanding intelligence that maybe is divorced from the workforce part. Because there are, by the way, many forms of intelligence. So Gardner's multiple intelligences, right?
There's kinesthetic intelligence, spatial intelligence. Dancers, for example, have built an intelligence where they understand proprioception, their body in space, in a way that you and I could not, right? Because we are not trained in that intelligence. Empathy is a form of intelligence. Resilience is a form of intelligence. There are all sorts of things that are not measured in SAT tests that we therefore do not value. I'm eager to hear Padre's view on this. Yeah. I absolutely love this idea of linking our understanding of intelligence back to the industrial revolution, because yes, that was such an upheaval in society that it makes sense that that's when we were trying to quantify the definition of intelligence that we use today. In my tradition, there's a little bit different of an angle on it, and that is to separate this idea of knowledge and understanding from intelligence. The knowledge to be able to do a task, the knowledge to be able to complete a process. However, intelligence requires agency. And agency is that intentional desire to act upon knowledge in order to affect the environment in which we live. And not just to affect the environment, but to take accountability for the intentional actions that we take. So from my tradition, intelligence looks like knowledge, but it has that additional step of agency, which we still don't think current AI has, because it cannot act as an agent; it can only act as a source of knowledge. But I am absolutely tickled. I love this idea of using the industrial revolution, because you may know that Pope Leo is big on the document Rerum Novarum, which is what the Catholic Church released during the Industrial Revolution, to introduce this idea of agency and bring in this idea that there is something innate and special about humanity, which is what Leo is talking about.
So do you have a working definition of intelligence? For us, yeah, intelligence would be the ability to take a knowledgeable understanding of the world and act in an intentional way to influence the environment based on values, goals, and beliefs. So for us, that's human agency. That's the step in intelligence that we don't think AI currently has. Rumman, is defining intelligence part of your book? In a way. Because I'm a social scientist, I frame it more sociotechnically. It's not enough to just define it; I'm not a philosopher. What I want to do is understand it in the context of the world, right? So what are the ways in which we have defined intelligence, maybe even judgment-agnostic, and what has that meant in how things have been executed? Because again, the fundamental question to me was always: why are we so scared of this thing? What is it forcing us to look at or question about ourselves, and what do we feel threatened about? And really, that's how I got to where I am. But Robert, I love what you're saying about this idea of intent and agency, and this is where we shift from intelligence to sentience or consciousness. And people conflate the two all the time. If you talk to the average person on the street and ask them what they think artificial general intelligence is, they think of something like the Terminator, or Her, you know, Scarlett Johansson's AI in Her. And those things had intent. They acted with desire. And there's nothing like that about these machines. And also, by the way, this narrative is being pushed by tech companies. It's very, very intentional. Why?
I coined a phrase back in 2017 or 2018, moral outsourcing, where essentially companies anthropomorphize these models on purpose so that when something goes wrong and something bad happens, they can say the AI did it. Right? The AI did a thing. And you see them doing it today. You see it starting with all of the tech layoffs. Jack Dorsey saying AI is taking jobs because AI is making it easier. Like, sir, you invested in a bunch of crypto that tanked and you overhired; not our fault. Something you can put the blame on when bad things happen. Such a great phrase. One thing about our way of thinking of intelligence: if you look at intelligence as that combination of agency and knowledge, there is this fear, and it's a very real fear, that there are humans who do not meet that definition of intelligence, who do not reach that level of agency. So that should also be on the board. And I love that, and I especially like it because one of the things I'm very, very focused on right now is the future of education and the future of work. And these are institutional flaws that predate AI. AI did not make the educational system fail our kids. AI did not make it difficult for a recent college graduate to translate their degree into a job. That existed before. How many of us work in the field we studied? Well, maybe some people in this room do, but most people don't, right? Most people studied something and ended up somewhere totally different. And we've just sort of accepted that. Most people will say what I learned in college has nothing to do with what I did even at my first job. I certainly am not, and that's fine. There's nothing wrong with that, but then we need to re-examine our institutions of pedagogy and say, well, how have we been teaching, and what have we been teaching?
And I have very strong thoughts about decisions that have been made in the educational system. But fundamentally, to get very specific, because AI and education is something I'm looking at a lot lately: the pedagogy of AI is very, very problematic, because we teach AI in general as a tool of productivity, not a tool of mastery. And we've done the same in education. Quote unquote smart kids know how to game the system; they're good test-takers, they know how to do well on the SATs, they know exactly what to say to the teacher and what they should write in their essays. Some of them happen to love learning; not all of them do. So we have taught education as an institution of productivity: produce X, Y, and Z, and then you'll get into Harvard or MIT or Stanford. And then we make this tool, and again we teach it as a tool of productivity. But there is research, by the way, which is excellent, into AI as a tool of mastery; none of it is being taught to kids that way. So I guess my fundamental point is that it actually has nothing to do with the technology itself, but with how we are framing our usage of it. And that's also what's driving a lot of the fear. We're talking to Rumman Chowdhury, the founder of Humane Intelligence. There's a nonprofit and there is a public benefit corporation. Tell us about Humane Intelligence. What's your goal here? Yeah, so the nonprofit was founded to build an independent community of algorithmic evaluators, which is very, very needed. Right now, essentially, tech companies write their own homework, grade their own tests, and pat themselves on the back about how smart they are.
And when anybody listening to this podcast or sitting in this room tries to use AI, however impressive it may be, it's like a smarty-pants technology. But if you try to use it for something very fundamental and real, you'll see it falls apart very quickly. And there are all these memes, like it can't spell strawberry, and all that stuff. But then there are the bigger issues, like there's a lot of embedded bias in it. You know that CEOs of these companies, and I'm especially thinking about Grok, have dictated how they want these models to answer certain questions. So there are biases baked in. And also, if our lives are meant to be impacted by AI, the average person should have a right to say how this tool is being used. So the nonprofit started as an organization that would try to cultivate and get people excited about evaluating AI models. The for-profit is specifically looking at how to build the infrastructure to do this: things like algorithmic transparency and technical methods of evaluation. One thing I want to say, to get a little in the weeds: machine learning and narrow AI, the pre-generative-AI stuff, those are largely statistical models. And as a statistician by background, we know how to math those things. We have over a hundred years of mathy mathing to figure things out, right? With generative AI, you have probabilistic outcomes. The way I describe it to people is: 2 plus 2 sometimes equals 3.9, sometimes equals 4.2, usually equals 4, but sometimes equals 98. So you don't have a consistent answer. And in trying to evaluate this, it is hard to make a test that is scientifically sound, reproducible, and generalizable. And these are all things we need to know if this model is going to work or fall apart.
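Rumman's "2 plus 2 sometimes equals 3.9" point is why single-shot tests tell you little about a generative model. A minimal sketch of the idea in Python, with a seeded random stub standing in for a real model API (the `ask_model` stub and its answer distribution are hypothetical, chosen only to mirror her example): run the same prompt many times and report a consistency rate rather than a single pass/fail.

```python
import random
from collections import Counter

def ask_model(prompt: str, rng: random.Random) -> str:
    """Hypothetical stand-in for a generative model call: the same
    prompt can return different answers on different runs."""
    return rng.choices(["4", "3.9", "4.2", "98"], weights=[90, 4, 4, 2])[0]

def consistency(prompt: str, expected: str, n_trials: int = 500, seed: int = 0) -> float:
    """Fraction of trials whose answer matches the expected one.
    With probabilistic outputs, repeated seeded trials -- not a single
    run -- are what make a test reproducible."""
    rng = random.Random(seed)
    answers = Counter(ask_model(prompt, rng) for _ in range(n_trials))
    return answers[expected] / n_trials

rate = consistency("What is 2 + 2?", expected="4")
print(f"answered '4' in {rate:.0%} of trials")
```

The point of the sketch is the shape of the harness: an evaluation of a probabilistic system has to be a distribution over trials, which is what makes soundness and reproducibility hard compared with classic statistical models.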
And we don't have that yet. So the for-profit, which is structured as a public benefit corporation for many reasons, is dedicated to creating the infrastructure. The nonprofit creates the community, and the for-profit creates the environment in which they can do these tests. I'm looking at the Humane Intelligence nonprofit webpage, and you talk about AI red teaming, which is so important. But instead of having it done by the companies that make the models, it's done, I presume, by a community of people. AI contextual evaluations: what's that? Contextual evaluation is actually a phrase coined by my colleagues Reva Schwartz and Gabriella Waters. They were both previously at NIST and now run their own consultancy called Civitas. Contextual evaluations really mean: how do we give a model a test that understands the context in which it will be used? So, for example, if I am a car company and I want to build a voice-activated AI system in the car to help people get directions or find the nearest gas station, how do I do an evaluation of that? That's not just some generic evaluation. Things you might want to think about in that situation: how does the AI give an answer that will be correct, that won't lead somebody to an unsafe place or distract somebody who's driving? These are very specific things that the very generic and superficial testing tools put out by Silicon Valley today really don't answer. They don't answer their questions. I do a lot of work with companies, and these are all not tech companies; these are companies trying to use AI: banks, insurance companies, et cetera.
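As a sketch of what a contextual evaluation might look like (all names and checks here are hypothetical illustrations, not Humane Intelligence's actual methodology): instead of a generic benchmark, each test case for the in-car assistant pairs a driver prompt with checks specific to the driving context, such as keeping spoken replies short and avoiding unsafe maneuvers.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalCase:
    prompt: str
    checks: List[Callable[[str], bool]]  # context-specific predicates on the reply

def reply_is_short(reply: str) -> bool:
    """Long spoken answers distract drivers; cap at 20 words."""
    return len(reply.split()) <= 20

def no_unsafe_maneuver(reply: str) -> bool:
    """A navigation reply should not suggest a U-turn mid-highway."""
    return "u-turn" not in reply.lower()

cases = [
    EvalCase("Find the nearest gas station", [reply_is_short]),
    EvalCase("Reroute me, I'm on the highway", [reply_is_short, no_unsafe_maneuver]),
]

def run_suite(model: Callable[[str], str]) -> Dict[str, bool]:
    """Run every case against the model and record pass/fail per prompt."""
    return {c.prompt: all(check(model(c.prompt)) for check in c.checks)
            for c in cases}

# A trivial stand-in model, just to show the harness shape:
stub = lambda prompt: "In 200 meters, turn right."
print(run_suite(stub))
```

The design choice is that the checks encode the deployment context (driving safety), not generic model quality, which is what distinguishes this from the one-size-fits-all leaderboards she criticizes.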
And zero of them have told me that they have found the tools being built in Silicon Valley useful, so they just do all of their evaluations in house. They try their best to do it themselves, which is not a formula for success. So they feel that there's risk. You have AI red teaming, AI contextual evaluations, and a bias bounty, which I presume is a challenge to find bias in these AI models. That's right. So all of this would really be under the rubric of AI safety, yes? Yeah, that's a tricky term. Yes, it is a tricky term. Well, because there's a lot of in-the-family fighting over responsibility, safety, governance. I don't mind the word safety; I think it's fine, but for some people it's coded as existential risk, which means there's a community of people that say AI has a 25% chance of killing us. And again, this very much anthropomorphizes the AI; it uses language like manipulation; it talks about things like bomb threats and scenarios. And frankly, from my perspective, I think that narrative is sometimes intentional, sometimes naive and privileged, and it distracts from the real harms we are seeing today, because we are busy speculating on future harms that are not possible. So what do we have today? We have algorithms that deny people jobs, that unfairly accuse them of crimes, that are used for surveillance. We know those are actual harms that happen, and instead an overly significant part of this community's funding, brainpower, and policy is spent spinning Terminator stories of AIs gone rogue. Yeah, we've said this many times. This is one of Jeff's favorite drums to beat. It actually is the flip side of the coin of moral outsourcing.
On the one hand, you say, well, it wasn't us, it was the AI, and on the other hand you say, but this AI could kill us. It's all kind of the same thing. One thing you say, Rumman, that I think is so important is that when you blame the AI that way, you take away our agency, which comes back to what Father Ballecer said, right? It acts as if we're powerless, as if the AI is just going to take over everything and there's nothing we can do about it. And these companies want it to be that way. I'd love to hear you riff on the hubristic notions they add of general intelligence, superintelligence. It's not enough to say that you're as good as humans, right? Superhuman intelligence, Übermensch intelligence. I don't know. Bingo. That's exactly where it goes. I happened to send you, and I also sent Leo, a paper this last week that Yann LeCun was one of the co-authors of, arguing against this notion of generality: that humans aren't general, that we're good at some stuff and crappy at other stuff. But this idea that these people are so smart they can build the machine that is smarter than all of us: is that a new plateau in this notion of intelligence as privilege and power? So both Yann and Fei-Fei Li: Fei-Fei has raised money for her startup, and Yann is doing something similar. And I think this is what Sarah Hooker is doing as well. Sarah just raised fifty million for her startup, called Adaption, which I cannot claim to know anything about, but it sounds like what Fei-Fei and Yann have been talking about, which is building world models, right? So their argument, and Yann's in making this... I love Yann; he's hilarious. I think everything he says is always correct, and he is not afraid to offend people. And when I say offend people, I mean fight the powers that be, not people like us.
But what he says, and he's always very, very correct, is that there is a belief in the general populace that these models are just linearly improving over time, and actually they're not. The newer versions of ChatGPT are better in some ways and worse in others than previous models. So it is not true that models are simply linearly or exponentially improving, and that all you've got to do is give them more data and more energy and, dot dot dot, they'll solve all of our problems. So he argues, and Fei-Fei argues, that you need world models, which are AI models that do more than just absorb language; they need to understand the world around us. This could be vision, it could be voice, it could be lots of different things. We don't have a world model yet, but this is what they're betting their careers on. And given that they are, quote, a godmother and godfather of AI, I figure they know what they're talking about. So I find that very interesting. And I was on this debate show two weeks ago, discussing whether AI will take our jobs. That was the point I made: these models are not simply linearly improving, and we may actually have come pretty close to saturation in their capabilities. And the bait and switch that's happened in the last year is that the models have not been improving; what's happened is that companies have shifted focus from building foundation models to building applications. So if you have Google, you may see all sorts of new AI stuff dropping every week. Yes, they're building Gemini, but now they're actually saying, let's take the Gemini that exists today and build these little tools, which is not necessarily a bad thing.
But again, this is not that world of super Übermensch intelligence that's going to sit at your desk and drink your coffee and take your job. That is a very, very different world we're talking about here. That's kind of what Jensen Huang was talking about on Monday, right, Jeff? That we've moved into the age of inference: we've moved away from the age of building models, and now it's about what the models can do. You've said, Rumman, that part of the problem is that the people in charge, government, which you say doesn't really have the technical capability to understand what it's doing when it's regulating this, and you've worked in government, so you know. And then you also point out that we in the public really don't have any way to measure any of this. It's a bit of a black box for us. What's the solution to that? It sounds like nobody knows what's going on, or nobody has an incentive to do anything about it. Maybe that's better. Yeah, well, okay, I can talk all day about policy. It's funny, because I talk to a lot of policymakers, and I am very heartened to see a lot of young people, these would be older Gen Z, interested in running for office, specifically on a tech platform. I think Mamdani has really emboldened a lot of people who want to see positive change. So there are a lot of junior people, but they will be the next generation, and all of their heads are in the right place. So I'm very heartened to see it. They may not have the wisdom or maturity yet; they'll get there. I would say five to ten years, which may be too slow, but there's a sea change happening in DC that I think will in some ways be quite positive. But one of the things that is my kind of pie-in-the-sky, ambitious, shoot-for-the-stars thing?
Is that I gave my TED Talk on this idea of a right to repair. And the reason I chose that phrasing is it really appeals to the old heads, right? This idea that if you own a piece of technology, or a piece of technology influences your life, you have a right to tinker with it and do stuff to it. And you know, the right to repair actually is more about physical devices, like iPhones and McDonald's soft serve machines. But I do give the example of AI tractors, John Deere versus farmers, who actually learned to work with hackers and hack into their tractors, because John Deere required that you work with a licensed technician from them. And again, this is a community of people who are used to just tinkering with their own stuff. But they can't wait three weeks for someone to show up. Crops grow when crops grow, right? So this was a fundamental problem for them. And I think we all need to think about, what are our rights as people? It was sort of meant as a thought exercise. We've never had technology framed that way to us before. You know, when I was at Twitter, we did this exercise where we wanted to understand what it would look like to give people more ownership of their timeline. And I worked with Dr. Sarah Roberts, who's the author of Behind the Screen, which was the first book that exposed content moderators and all of the horrific things that they have to see and do just to make sure we get a sanitized internet. And we worked with her to really understand how people feel about agency and ownership. And the TLDR is that everybody said they wanted agency, but nobody understood what that looked like. No one could articulate what that meant. And to be fair to them, we have never been given that. We have never been given ownership and agency.
So what does it look like to have a right to repair? I'm not sure if I know, but I think a starting point is something like public red teaming, right? With regular people. So going back to the red teaming, we purposely do these exercises with teachers, students, policymakers. The point is not AI experts in the room. It's to break down that initial barrier people have when they say, oh, I'm not an AI expert. Great, but you're an expert in being you. You're an expert in being a teacher, or being a multilingual sociologist, or a cultural expert. That's what we need more than more tech people in the room. So that's a starting point. A bug bounty for social harms, kind of? Well, exactly, and that's kind of what we did with NIST. We did a project with NIST called ARIA. And what we did was ask literally anybody in America to go onto our platform and evaluate gen AI models, and that information went to NIST to inform their standards development. And when I say that, like, I literally gave it to the guy who manages my gym. And by the way, he was super interested in it, because he's like, hey, I have a side hustle where I make websites and I'm really worried that AI is going to come take my job. I really want to do this. So when I say everybody... And the thing is, that's the dirty secret. Everybody can interact with it, but this mythos around it, this "we're too smart for you and the technology's too complicated," it's all on purpose, to make us not feel like we deserve ownership. Yeah. Pay no attention to the man behind the curtain. Robert, you wanted to say something? Yeah, I was wondering.
So back in 2023, at DEF CON in the AI Village, two things really struck me from the final analysis that came out of the event. The first was the recognition you had that sometimes closed models are required for security and intellectual property, but that the creators need to provide transparency on capabilities. And I'm wondering how much of that you're seeing. Do you actually see the creators of these foundational models explaining what it is they want their model to be able to control, what they want it to be able to do? The second part, and I'm sorry if I'm not remembering this correctly, is you were talking about the democratization of desirable behavior, that that was absolutely something that needed to come out of the red teaming. We needed to be able to get together and, in making policy, decide how we are going to regulate the reward behaviors of these models. How much progress have we made since 2023? And does the Anthropic soul document do the job? Yeah. No, so I'll work backwards. No. I had a feeling you might say that. You know, I am very cynical, if you've not gathered, at the intentions of the people who simultaneously are going to be billionaires in building the technology and yet proclaim to also be the public philosophers who will cure it all and save humanity. I'm like, okay. So you're gonna point out the problems but not do anything about them. But I want to talk about your question. So first is this idea of closed versus open. This is one of the reasons why we need an independent community of evaluators, right? So think of literally any other industry that is impactful: finance, education, airline safety. They too protect intellectual property, right? But if you are a licensed evaluator, let's say a financial auditor, right?
You have this license you get, you have professional standards, you are allowed access to things that a regular person off the street would not have access to, and you have guidelines in which you can do this testing. We do this in healthcare as well, right? So this is not a completely novel problem, although tech likes to think this is the first time anyone's thought about it. It's not, right? We have created institutions, professions, and systems to protect IP while also enabling independent evaluation. This is why that independent community is needed. I think the public red teaming is a great tool for awareness raising, for people to get demystified and learn how the technology is working. But if we want to talk about improving these models, writing good regulation, really understanding performance and harms, that is a different animal. And this is, again, with the for-profit, why I want to build this infrastructure. We need people who are skilled in doing this. We need a way of understanding their expertise and giving them access, and this could be legal protections, legal access to it. It could be professional certifications. But this is why you need the profession. And then the second, of democratizing... sorry, what was the phrase again? You said desirable behavior. Yes. So one of the... You can tell none of these people care about philosophy or social sciences or anything like that, right? There's this very arrogant notion that they can arrive at this universal good. And I always find it really funny when people are trying to make these models and they claim it will have this constitution, or values, universal values. Like, I have actually heard people say, oh, obviously we all believe X. And we actually don't all believe X.
It's actually very, very hard, slash impossible, to come to universal values. Even if you think about the most fundamental universal value, one might argue, which is that human beings say we shouldn't kill other human beings. Don't we, though? Don't we have the death penalty in the United States? We do have state-sanctioned killing of human beings. We've actually said it's lawful and okay. So we don't universally think that it's wrong to kill other people. And one would argue that would be the most fundamental thing, right? The most fundamental thing that we could theoretically say we universally agree on. And yet we don't. So yeah, there is this arrogance in the idea that we can come to this universal list of values. You know, one of the things I love to laugh about is that one of these benchmarks is called Humanity's Last Exam. Yes. What a dramatic name. But you look at it, and it's a bunch of physics and math questions, and I'm like, that's the opposite of a humanities last exam. It's real tech bro, isn't it? Yeah. Yeah, really. And by the way, there was this counter paper by a bunch of people that was trying to make a benchmark for, quote, universal values, and I remember the first thing I went to was what they considered to be global historical knowledge, and it was Europe, America, Asia, and Other. Cool, cool, cool, cool, cool. Other. Yeah, yeah. If you live in Other, you might not agree. America, which is the youngest of all of the nations, gets its own. Right. But yeah, so this is what these people come up with. We're really glad that we could spend some time with you. I wish we had more time. Rumman Chowdhury, I look forward to your book, but I know you have fifty thousand words to write by August. So, I'm a connoisseur of the showmanship of it, every time he does it. And we're gonna talk about it later in the show too, of course.
Yeah. And so at the end, he went gaga over OpenClaw. And I'm curious to hear about that. But I was thinking about this. In my world, in media, we went from a world where you couldn't make media unless you had the tools of production and distribution, unless you had the capital, unless you had the equity to do that. And what the internet has obviously done is it means that we can all entertain ourselves, we can all make media, the culture makes itself, fashion determines itself. And I celebrate that immensely. For all its harms, I think it's better. But the internet ended up being top-down in a lot of ways, corporate, right? Just as happened to media. Along comes AI. And your co-author in the series, Charlton McIlwain, surprised me with a surprisingly optimistic view. Having written the book Black Software, about the oppression the technology caused in Black America, he sees an opportunity to break out of that. Now, so I'm finally getting to the point of OpenClaw. Does this mean that we can all make technology, just as we can all make media on our own, we can all make creativity on our own now? Is it possible, not to be overoptimistic, that this opens the door for us all to make our own technology now? Is that a step to give us all more agency? Even though the models have to be made by the big boys, and they're all boys, except Fei-Fei. But is there an opening here, that the technology gives us the chance to take it over? Well, and this is where right to repair comes in. I fully agree with you. But I think that has to be intentional, and it needs to be, because the tech companies will not frame things that way, right? Because they don't want that. We need to do that for ourselves.
And just as an example, my partner has been messing with making an IoT system for our house, but one that's done in a way where we're locally hosting all of our own data, so that we're not sharing information out. We don't have, like, a Ring doorbell; right now we have, let's say, SimpliSafe, right? So instead of that... and the thing is, all of the tools now exist to do that. And my partner, who by the way is an architect and not a programmer, but who has always had a passive interest in IoT and automation, and out of necessity, because we move around a lot, we are actually able to do that today in a way that we were not able to a few years ago. So that's just one example. But again, no one's going to sell you that. Right. So either we have to raise awareness among people that you can go do this, or create a counter movement to provide that service and give people that world. Because like I said, AI and this new wave of technology was not given to us the way the internet was. The internet was given to us as a tool of free use and democratization. Algorithms and this new wave, they get savvier and savvier every time. They consolidate more and more power and wealth, and they're not gonna give that up randomly. We can make a counter movement that maybe is designed around things like right to repair, where we can just do stuff like this. I was telling her that she should build a side hustle. I think it would go over well in places like New York, right? Where you just do this for people. Somebody pays you a bunch of money, and you're like, I'll buy you a server and a bunch of Raspberry Pis and set up a dashboard. And there you go. You can monitor your whole house, and not one bit of that data is going to go to OpenAI or Amazon or anybody else. Leo, that's your new business. Maybe.
I love it. It sounds like your general argument is basically for human agency in all of this. Not to let the companies that are creating this stuff take that from us, and not to assume that it's a black box that we cannot have any understanding or agency over, but in fact to take that back, just as we have, or are trying to, with right to repair. Is that fair? Yeah, absolutely. I think paramount to all of this is the ability for people to choose their path in life. Like, I may not agree with the way somebody... maybe somebody does want to give their data to Amazon. I don't know. I don't care. But we don't have a market with choice right now, and our choices are getting fewer and fewer. I want to create a market where we actually have choices, where we can act on our values, because that's what a lot of people are expressing. They have particular values about their personal information, data, even passive data, like your Ring doorbell, and how that's being used in ways that they are not okay with. Right. I look forward to your book. You better get writing. Well, I really look forward to it. Your editor is sitting right here, and he seems nervous. So, no, no. This is very exciting. Tell us again the working title. You can change it; we're not requiring... Oh, I don't know. I really don't know. Maybe it's something like The New Intelligence: Critical Thinking and Cognition in the AI Era. The other one I've been working with is Measuring Minds. That is the title of the first chapter, because that's what all of this attempts to do, and do poorly. Yeah. Our publisher is very good at titles.
So he'll have a... Looks like I will... I'm actually very, very bad at naming things. When I built the first enterprise bias detection and mitigation platform, I just called it the Fairness Tool, because I'm like, it makes things fair. So thank you for the work you did at Twitter. I'm sorry that Elon didn't think it was important. But hey, it's worked out well for you, right? You're probably better off, to be honest. Thank you so much, Rumman Chowdhury. Thank you for having me. Really look forward to the book. We'll have you back when the book comes out. That's what will happen. Yeah. Yeah, if not sooner. You have one definitive future reader. Yes. And I really support what you're saying, which is we need to fight for our own agency in all of this. We can't let the frontier labs and the hyperscalers dominate this just because we don't understand it. It's not gonna work. It's not enough. And government, really... Government isn't the solution either, unfortunately. Maybe it will be in the future with a younger crew, but not now. Thank you, Rumman. We will have more of Intelligent Machines in just a bit. Yay. Lovely. Is this what you've been doing on the show? This is excellent. Oh, you haven't listened. Okay. Do you want to hear an interesting story that my friend Serafina told me? She's the head of the podcast. Before you say it, I want to let you know we're still on the air. Oh. It's not part of the podcast, but we stream live. No, no, it's totally... it's actually an interesting story. And it's gonna be at some place in my book. It may actually even be in the introduction. So, you know, we worry a lot about young people and over-reliance on technology and cognition and critical thinking, et cetera. Do you know that Socrates, in the Phaedrus, was very, very concerned with the advent of writing? Yes.
Because to him, memorization was the mark of intelligence. And he was concerned that all of his students would become stupider, because now we have this thing called writing, and we have freely available paper, and they're no longer going to memorize. So it's interesting, because as I work on things like AI and education, people spout their fears about critical thinking and over-reliance. And as somebody who measures things, what I think about is, well, the existence of over-reliance presumes the existence of an appropriate amount of reliance, which means there's also under-reliance. But nobody can tell me what appropriate reliance is, because they benchmark it on themselves. But our parents all told us we watched too much TV or sat on our computers too long, right? Any of us who have kids probably tell our kids they're on their phones too much. That's just how it goes, parent to child. We all worry the next generation is getting dumber. And maybe they are. Or maybe intelligence is just shifting, right? Because, I mean, this is Socrates here. This guy knew what he was talking about, right? I think Socrates is right. You should start memorizing all that stuff right away. Stop writing. No computers, no phones. No more writing. Exactly. How many people don't know their own phone number, because you don't need it? Right. Or, I don't know, what is a phone number? We number phone. Scientific American has a very good piece today, by the way, just on the "kids today" thing, arguing against that and saying the kids today are in fact in good shape, and it brings data to it. Good. So I hope that's true. I hope that's true. Take care, Rumman. By the way, love the hat. I was looking it up. It is not the IEEE ISTO. It's actually a Portuguese brand.
Well, it's a sustainably sourced B Corp in Portugal. And they make amazing organic cottons and linens, et cetera. So I wanna support a local sustainable business, and my "I will quit tech and do something else" job would be to open a textiles shop in Lisbon. I have found joy in tangibles the more I work on intangible things. You know, people are out here saying they want to open a bakery. Like, A, I am not waking up at 4 a.m. to make croissants, and B, I am not dealing with the 9 a.m. coffee rush. Absolutely not. What I'm gonna do is open up a shop where we sell beautiful linens and cottons and fabric and ceramics, and only the people who I want to come in will come in. But I just want to let you know, if people ask, you could say it stands for the Industry Standards and Technology Organization. I can. You could. Or I can get people to buy organic sustainable cotton. Thank you, Rumman. Take care. Thank you. I looked up ISTO and that's what I found. Wrong ISTO. Wrong ISTO, exactly. Exactly. We're gonna do an ad. I've been installing Nemo Claw during the interview, and I'm ready to load the claw. Oh yes. All right, guys. I'll see you later. Thanks for having me. Pleasure to meet you. Very interesting stuff. We'll have more Intelligent Machines, and our special guest, Father Robert Ballecer, filling in for Paris, in just a little bit. Our show today brought to you by my domain registrar, spaceship.com. Spaceship.com slash twit. Remember when Paris wanted to do a website, Secretly British, and we registered a domain, Secretly Brit.sh? Well, I did it at Spaceship because it was so easy. Plus, we had searched around, and it was also the best price. If you've heard us talk about Spaceship before, there's a reason it keeps coming back. Spaceship is now really one of the fastest growing domain registrars in history.
It's because Spaceship is rethinking how people register and manage domains. Its fresh approach has now led to six and a half million domains under management in record time. We just started talking about them a few months ago. That kind of growth comes from, well, I guess, giving people what they actually want at a fair price. Spaceship offers transparent, low pricing on domain registrations. By the way, if you're somewhere else, move your domains over. Their transfer pricing is fantastic. Their renewal pricing is fantastic. This means there's more clarity over what you're paying for over time. It's so often the case that it's a dollar for the first year and a thousand dollars for the second year. Not at spaceship.com. Alongside great value, the platform is especially built for flexibility. You can instantly connect your Spaceship-registered domains to Spaceship products. We clicked a button, and Secretly British had an email site. You get web hosting if you want. She hasn't set up her site yet, so I pressed a button that connected it to her existing domain. But when we have a website for it, it'll be very easy to host it on Spaceship. That professional email is first rate. Even virtual machines. So a great place to host your OpenClaw, for instance. And you can build and test before committing, because almost every Spaceship product comes with a 30-day trial. But if you prefer third-party tools, don't worry, no problem. Just point your domain to what you need by updating your DNS records or name servers. It's easy to do. And actually, they have a nice little AI called Alf that can do that for you. So now you have the freedom to build your stack exactly as you want, because they know this is what we geeks want. It's basically the best of every world. Visit spaceship.com slash twit to learn more. That's spaceship.com slash twit. We thank them so much for their support. So, OpenClaw. Let's see. I installed it. I needed Docker.
During the... We were watching the keynote on Monday, you, me, and Micah Sargent, Jeff Jarvis. The NVIDIA keynote. What did I say? You just said "the keynote," as if there's only one. You know what? And I will stand by this. As I was watching Jensen Huang masterfully spend two hours and some minutes describing all their products, I said this: there is no CEO in technology today that can kiss the hem of his robes. He has such mastery of his topic. Yeah, and boy, is that company doing all the right things. So one of the things he talked about is the fact that OpenClaw is the fastest growing open source project in history, more stars than Linux in just a few months. And he said, so we're going to support this with an enterprise-focused, safe OpenClaw, using something called OpenShell. It's installed a bunch of stuff here, you can see my screen: OpenShell, a CLI. Apparently... it says NIM requires an NVIDIA GPU. Oh, of course, but we can use cloud inference. Well, you can buy that now. There's a new Dell machine that has that, for a small cost. And now I have clicked the link. It does say security risk, because it's HTTPS with an untrusted certificate. Whoops. This is it? What am I seeing? What is this? That's not... I mean, let me go back to localhost. Hold on a second. That's weird. Is that what they wanted to show me? It says warning. Oh, I pressed go back. No, no, no. I want to go forward. Accept the risk and continue. And now, ladies and gentlemen... nothing. Excellent. I was gonna introduce you to my new Nemo Claw. Well, go to advanced, or go to view certificate, right? No, no. This is it. Accept the risk and continue. I can view the certificate, the OpenShell server. I think it's running in my Docker. So anyway. So what makes it safe, Leo? Explain that.
It's running in Docker, which most people recommend you do with OpenClaw anyway, because if you're running it in a container, you're a little bit safer, although Docker can easily be misconfigured to be not safe, right, Robert? Oh yeah. Oh yeah. We just did that last week. So yeah. Oh yeah, absolutely. So that was one of the things; they call it OpenClaw with guardrails. I think, and we've said this before, OpenClaw showed that this is the year of agentic AI. In fact, I thought that was the thing that was very interesting. And it was a good thing that he bought into Groq with a Q. Oh yeah. Those chips are now very important to what they're doing. You kind of rolled your eyes, Robert. I mean, yeah. We're still building models, right? We're still building models. Yeah. I understand why they want to, though, because all the new money-making applications seem to be in inference. Oh. That was CES. CES's AI booth, and that part of the West Hall, was all about inference: using inference in driving, using inference in home appliances, using inference in security. So yes, I get the push, because they see that as an untapped market. But if you look at the sustainability of the current inference model, it's not there. I don't think it's nearly as profitable as they think it's going to be. Jensen did take a little bit of a victory lap. This would be the picture. This is the picture of him with his WWF belt or something. He said, and this spiked the stock briefly until people thought about it, that NVIDIA was poised to sell a trillion dollars' worth of Blackwell and Vera Rubin chips next year. A trillion, up from five hundred billion. Is inference just another word for application? And does this mean that this is an effort to get the industry going into retail channels?
So, yeah. When we talk about the foundational model, that's the traditional approach: we're gonna shove a bunch of data into this training, and then we're gonna get something out that we can use. The inference model is sort of continuous training. So we now deploy a foundational model, but it starts to learn through its interactions with the real-world environment, and that goes back into the training. So one of the hottest things in AI right now is basically an inference machine, right? Yeah. Yes. Actually, one of the things I've been really thinking about lately... Don't do that. No, Leo, no. Well, actually... The reason I started TWiT in the first place, the whole point of this, was for me to just do the stuff I like, and have other people do the stuff I didn't do, like edit the shows and produce the shows and technical direct the shows. My goal, from day one, twenty years ago, was just to walk in, sit down at the microphone, turn it on, do a show, get up, and leave. Be done with it. But instead I've created what Cory Doctorow calls a reverse centaur. Which is, basically, AI is making it more work for me, not less. So he talks about a centaur: a computer is like a centaur, but instead of a human and a horse beast, it's a human and a machine beast, where the machine does the carrying and the human gets to be on top and look around. So the work is being done by the bottom part of the centaur. In a reverse centaur, the human is doing all the labor, doing all the work, while the AI is sitting there looking around. And in a way, I created that with my workflow, because now I have to spend hours every day going through stories. Admittedly, once I have the stories, it generates the rundown and does a lot of the busy work. But I realize the piece that's missing, and this is the hard part, is I want it to somehow encapsulate my editorial judgment.
Now, in the past, you would maybe train a model for that, but I'm thinking I can create a small language model based on a bigger model. The bigger model has all the language capabilities, and I train the small model to have my editorial judgment. You think that's crazy, Robert? It's not crazy. I see an exceptional expansion of requirements for power and other resources, because if you are using a small model and then training it with inference, right, you are constantly going back, and you have to retrain, retokenize your data; otherwise it's not really, truly learning from the inference data that you're giving it. Oh yeah, you're right. Well, one of the things that we were looking at doing is using some sort of Bayesian system or something to train it, using articles I didn't choose and articles I did choose. I now have a pretty big database of articles I looked at and didn't use, and articles I looked at and bookmarked. And that could train it. Actually, Darren Oakie suggested something, kind of an exotic technique, that he says is working really well for him. Something called, what was it, SSLM? SLV, I think it was? Some sort of SV... A linear SVM. Same idea, I guess. He had tried Bayesian and other statistical tools. Anyway, I'm gonna try it, I'm gonna play with that. But my point being, that's kind of inference. Like, I'm not gonna train a big model, that's already done. I know it's not exactly training, but it is a way... It's never-ending. But that's fine, because it's always gonna get better. I mean, it would end eventually, I guess, if it somehow said, oh, I get it, Leo, you just like this kind of stuff and don't like this kind of stuff. Every time you tell it what you want, you are training it to know what you want. Right. Correct. And at some point, it's even possible to conceive of a time when it's done. But maybe not.
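For the curious, the "Bayesian system" Leo floats here can be sketched in a few dozen lines: a naive Bayes classifier over headline words, trained on articles he kept versus articles he skipped. This is only an illustration, not Leo's actual workflow; the headlines, the `kept`/`skipped` labels, and the class names are all made up, and in practice you'd likely reach for a library classifier (such as the linear SVM Darren suggests) over richer features than bag-of-words.

```python
# Minimal sketch of a naive Bayes "editorial judgment" filter.
# Headlines and labels below are invented for illustration only.
import math
from collections import Counter

def tokenize(headline):
    return headline.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {"kept": Counter(), "skipped": Counter()}
        self.doc_counts = {"kept": 0, "skipped": 0}

    def train(self, headline, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(headline))

    def score(self, headline):
        # Log-probability ratio: positive means "kept" is more likely.
        total = sum(self.doc_counts.values())
        score = math.log(self.doc_counts["kept"] / total) - \
                math.log(self.doc_counts["skipped"] / total)
        for word in tokenize(headline):
            for label, sign in (("kept", 1), ("skipped", -1)):
                seen = self.word_counts[label][word] + 1  # add-one smoothing
                denom = sum(self.word_counts[label].values()) + 1
                score += sign * math.log(seen / denom)
        return score

model = NaiveBayes()
model.train("New open source AI agent framework released", "kept")
model.train("Local LLM inference speeds up on consumer GPUs", "kept")
model.train("Celebrity spotted using a chatbot", "skipped")
model.train("Top ten chatbot pickup lines", "skipped")

print(model.score("Open source local inference tool released") > 0)  # → True
print(model.score("Top ten celebrity chatbot moments") > 0)          # → False
```

Each time Leo bookmarks or skips a story, another `train()` call nudges the word counts, which is exactly the "every time you tell it what you want, you are training it" loop described above.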
One thing I do like about the inference model is it lends itself to local models. Exactly. Cheaper, because cheaper local models. Right. Cheaper, exactly. Like, the reason why it was so hard pushed in the automotive section is because they were saying, look, we want to create a model for full self-driving, but one that learns your style of driving. Exactly how you drive, not just how everyone drives. And we don't want your driving to affect the model for other drivers. Because right now, the Tesla... Exactly. And he rolls through stops. That's not good. Actually, you know, BMW has announced this new Neue Klasse model and their new i3, which they just announced yesterday. They say the whole point of the self-driving is, we're going to learn your style. So they're on top of this. What if you are a bad driver? Will it learn... Well, they are a bad driver. Correct for you? That's not the same thing. But perhaps you're more aggressive about lane changes, or less aggressive, than I would be. If it would learn... No, don't change if there's somebody a hundred feet back. Yeah. Did Jensen Huang's announcements about yet more auto companies he's working with on self-driving, does that torpedo Tesla and Musk to a great extent? At this point, you can't really torpedo Tesla and Musk, because they are in such a... They're self-torpedoing. Yeah, they're in such a bubble that they can torpedo themselves. That's about it. Only they can torpedo themselves. That's right. Yeah. Groq, which was a multi-billion-dollar acqui-hire, really. Groq with a Q; since we just talked about Musk, to be clear, it's the one with a Q, for NVIDIA. It is a server chip. They licensed the technology, they didn't actually buy the company. It's designed to make AI servers more cost efficient for things like AI coding, for inference, in effect. And the Groq system will begin shipping in the third quarter of this year, according to Huang.
And it's going to be made by Samsung, which was kind of a surprise. I thought that NVIDIA was a big TSMC client. I think they still are, but Samsung. They've just maxed them out. Yeah. Samsung's going to be making these. It's not a GPU. Groq integrates memory onto the chip. It's really built to speed up this kind of communication within. Yeah. Then the other thing they announced, DLSS 5, which really I don't think is important. No. But it's ugly. It really made people upset. Well, certain people who make certain things. Gamers really didn't like it. The idea was it takes existing assets in a game and locally, you know, pretties them up. You should see how much people loved it. Oh, you think? It's frame interpolation, basically. Yeah, it's the same thing. It's creating something out of limited information. So maybe you get a couple of frames that look great, but most of the time you're going to be going, meh. Hey, this is Benito. My problem with this is that it changes the art direction of the game. Here's an example that I think is quite good, myself. This is me. Actually, somebody generated this on the Discord. It's like you as Tom Cruise. Yeah. It's a fun place. It even made my eyes blue. Yeah, you see? That's exactly how I look. Can you do a dissolve instead of a jump cut? I think it'll be a little more morphing. Morphing into, well, so, Jensen Huang was actually a little pissed, shall I say, at all of this. His reaction was, you just don't get it. Tom's Hardware asked, Paul Alcorn did, and the reason for that is, as I have explained very carefully now, and I haven't heard the recording, but I can see him saying this: DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI. Oh, well, in that case, no problem.
You know what, I think Anthony Nielsen, our own Anthony Nielsen, got it right when he said it wouldn't have been so upsetting to people had he shown it with the backgrounds instead. But really what bugged people was he showed it with people. Well, so, Benito, is it your fear that it takes away artistic agency? That it's a good idea? Yeah, it changes the graphics. It changes the graphics in a way that, don't screw with what I made. Well, the people who would use this, I presume, are the game companies themselves, right? No, that's what's happening. This is so the companies don't have to do this themselves. So has DLSS traditionally been something you would turn on? Like ray tracing, it's something you turn on. Yeah, yeah. It's enabled. I don't know if the company, show my screen, because here are some examples. It's not always, by the way, beautification. Here is one from Hogwarts Legacy. It's turning that older woman into an older-woman-looking woman. It's adding lighting and shading. It is changing the look a little bit. I don't know. It doesn't bother me as much. Gamers historically have really been negative about AI in general. They're a persnickety bunch, yeah. Yeah. It doesn't bother me as a gamer, it bothers me as an artist. Yeah. But if a game company, you know, can use this to make their stuff look real. Yeah, I mean, as the artist you can use this to make it more realistic. What's wrong with that? Yeah. If it's in your control. Anyway, this is one of those demos where we'd have to see it anyway. This is just a video that NVIDIA created. But it probably got more attention than anything else Jensen Huang mentioned. Well, in certain corners. Certain circles. Yeah. Well, I mean, I'm willing to test it out in about five years when I can actually afford to buy one of their new products. So, yeah. Nobody can afford this technology.
That was the DGX Spark. They announced that a number of third-party OEMs were going to be able to make these Spark-based machines. So do you have the sin of envy and covetousness? You want one. I have that sin in spades, my friend. I mean, I've got one downstairs. Oh, I forgot. Robert brought one home from CES. Yeah, I forgot. You brought one back. Yeah, it was swag. It was booth swag. Yeah. Was it under your seat on the way home? You get a Spark, and you get a Spark. Honestly, I don't want to need a five-thousand-dollar piece of hardware to do this stuff. And that's why I bought the Framework, which was expensive, three thousand, but it has a hundred twenty-eight gigs of RAM and a Strix Halo. I'm interested in models I can run on that. That's where the retail-level excitement comes. Do you know what a DGX Spark is going to cost from Dell? They didn't say, but it's around three to five thousand dollars. Okay. Yeah. And I think Darren's was ASUS, if I remember correctly. Several OEMs will make it. Supermicro had one that was the ugliest thing ever. It looked like a tower. It doesn't need to be a tower. Yeah, he got the ASUS Ascent GX10. And it has the firepower to basically go whole hog on it. So what does that let you do that you wouldn't have done before? Well, first of all, out of the box, anything artistic, anything you want to do with video or photo, that's a no-brainer. But what we've been using it for is translation models, because we deal with a lot of languages here, and at the same time to do summarization of the conversations that are happening in different languages. It's extremely effective at that. And I would not want a frontier model to do those kinds of things. We don't. We don't.
But we do need the privacy, because the conversations we have here are closed-door. So we cannot in any way, shape, or form use cloud-based infrastructure. This lets us actually do it. Makes sense. Yeah. I think also a lot of us will do a hybrid thing. For instance with Claude, you know, we'll use Opus 4.6 for the really high-end stuff, but we can use Qwen or Kimi or something else for stuff that doesn't need so much power. What models do you use locally, Robert? Oh, I don't know. I handed it over to our IT guy, so he's running all of our models for us. This sounds like running a Sun Microsystems computer circa nineteen ninety-five, you know? Yeah. In a couple of years it'll be a lot less expensive. It'll be a lot more affordable. I mean, Blackwell ones. I'm not sure I agreed with Rumman, and I wasn't going to challenge her because she's way smarter than I am, but I'm not sure I'd agree with her that we're flatlining with LLM improvements. Well, that's the discussion we have all the time. I don't think that's at all in evidence. The argument that Yann and Fei-Fei make is that maybe we're not flatlining, but it will only take us so far. I would point out that they are just as self-serving as Sam Altman. I mean, how much money did they just raise? 1.03 billion. Yeah. So, of course: oh, our way of doing it is much better than what the other guys are doing. Yeah, but they had an argument that a lot of people bought. I think that, I mean, this is a big deal and not much was made of it, because I went to a debate between Adam Brown of DeepMind and Yann LeCun. It wasn't meant to be a debate, it turned into one. And DeepMind was still scale, scale, scale, models will get us there. And Demis Hassabis has switched recently and has been talking about the need for world models, and that LLMs alone won't get us there.
Of course, defining "there" is the other issue. Right. The more kinds of data, the better. But I've been thinking about this lately, because their argument is, well, you can't describe the world in text. Except isn't that essentially how we work as humans? Cats don't, is their argument. My brain works mostly in words, right? How else do you know the physical universe? What if you weren't limited by words? I often think about that. I have to translate everything into words because that's the way I operate, but a dolphin doesn't. Yeah, thought-to-text is lossy. Sure. Okay. Yes. So now you're saying we can make something that's smarter than humans. Ha! Trapped me, did you? So, see, those words work pretty well. This is the limit of tokenization, and the need to address so much storage at any given time for any given answer. However, I think we're going to be able to do it. Anthropic just gave us a million-token context window. But being able to run that model as fast as we would need to? What we're doing instead is creating models that are specifically good at a thing, versus the human brain, which can be very good at many, many things. We can switch gears very easily. Models cannot. And I am, by the way, not against the idea of having physics models and as many models as you can. I'm just quibbling with the sole argument, oh, we've tapped out LLMs. I don't think that's true. The other thing, that paper I sent you that Yann LeCun was a co-author of, his argument was against this notion of general intelligence. Right. Saying that every human being is good at some stuff and crappy at other stuff. There's no such thing. And same with machines. And it has interesting outcroppings as well, because what LeCun argues is that if you think about specialized models, you can also limit the model to what it does. Right. Makes it safer. Makes it safer.
No, and that's what I was just saying, which is we've got these general-purpose LLMs, but the future lies with special-purpose, smaller language models, specially trained models. I mean, absolutely. We're not going to throw out the LLMs. We're going to still use those as the base. But I really, absolutely think that we are going to specialize. I was watching a video this morning, an Australian fellow, a video about small language models, because I was very interested in this notion. He's an Australian, and he said one of the problems we have in Australia is a lot of sun and a lot of skin cancer. So he created an iOS app, this was part of a Kaggle competition, an iOS app that's really interesting. It's not diagnostic. You take a picture of something, a mole or whatever, as many pictures as you want, and it saves them. It does describe it, and then next year you take another picture, it remembers the things you took pictures of, and then you can look at the change from year to year. So it's like a self-exam that you can then send to your doctor. And that's based on a language model that can run on an iPhone in just about three or four gigs of RAM. And it's just categorizing, not diagnosing. And I thought that was very interesting. That's a perfect example of a specialization. Yeah. A model specialized for protein folding is obviously going to end up better and faster at protein folding, because it's not, in essence, distracted by other tasks. Right. And I think that makes perfect sense. And it doesn't detract at all from the power of the model. In fact, it's a way to get more. Yeah. MPoole says in our Discord: intelligence is what you are capable of; inference is what you do with what you know. These models already know so much, that's why the focus is moving to inference. I would agree. And by the way, these specialized models, that is the inference model.
That's inference. Yes, exactly. All right, let's take one more, no, no, let's take another ad read. And so that's GTC. I enjoyed it. I'm really glad we covered it. Jensen Huang is an amazing fellow, and NVIDIA clearly is firing on all cylinders, and they have many cylinders to fire. Which he claimed he would have in a year, I think. Yeah, he said twenty twenty-seven. Mind-blowing. It's nice to have Father Robert Ballecer here. Paris will be back next week, but it's great to get you on. You've never been on this show. I think this is a good show for you. Finally, at last. Well, actually, weren't you on once when I wasn't here? No. I was going to host it once when Leo was going on vacation, and then he didn't go on vacation. That's right. So we haven't been together on another show. Oh, I'm definitely going on vacation. You know what I got, actually? One of the things I was very excited about when I first heard about Starlink way back in the day, before Elon, when Elon was still somewhat human, was the notion that I would finally be able to travel and do the shows from anywhere. And I'm going to order one, because I'm going to Hawaii in May and I want to do the shows from there. Oh. So I ordered a Starlink Mini. I can't wait for the shirts. A Starlink Mini. Oh. You can put it on your balcony. I should be able to do the shows anywhere I can get a clear view of the sky. It has plenty of bandwidth to do the shows. In fact, we often fall back to Starlink in the studio when Comcast dies on us. So I'm setting up a portable studio. How much does it cost? It's not much at all if you do the consumer version. Right now we have a business account, which means I have to go to Costco or Best Buy and buy it as a consumer. I have to wear a hat and a mustache and stick it under my raincoat. So yeah, I'm a consumer.
Did you know they don't let you take those on cruise ships? For good reason. Yeah, they sell the Wi-Fi. Yeah, and you don't want to use it at sea either, right? It's not as fast at sea, because there are fewer downlink stations if you're in the ocean. Yeah. And because cruise ships use Starlink. Exactly. But I want to know this: are you taking Claude along on vacation? Claude's coming. I've already set that all up. Well, I really do want an agent. I really do want an agent. Yeah, you're going to go crazy. So my personal opinion on all this: I have tried all of these. The latest is from the president of Y Combinator, Garry Tan, who has just put out his own. These are basically skills for Claude. Everybody's done it. There's GSD, Getting Stuff Done. There's Superpowers. I've been playing with something called PAI, Personal AI assistant. OpenClaw is just a variant on all of these. The idea is you load it up with skills and API keys and loops so they can run continuously. Really, it's just Claude. What people love about all of this is how good Claude is, and then they're just plastering layers on top of it. And sometimes I think it really is better just to use vanilla Claude. So I think what I need to do is strip it all out, take all that crap out, all these skills and stuff. Darren, I'll keep your improved skill. That's a good one. Darren's skills are very good. But strip out most of that stuff. Maybe write a few of my own. Skills are a combination of prompts, and then you can put code in there. It's one of the reasons coders still have an advantage. You can put bash commands in, you can put code in. It's a combination of all of those. For instance, a good skill, a skill I want to write, is a TWiT API skill, which would be everything Claude would need to access our API. It would be a first step toward, I don't know, replacing all the humans.
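For context on what a skill actually is: in Anthropic's format, a skill is a folder containing a SKILL.md file, a bit of YAML frontmatter plus plain-language instructions, optionally alongside scripts the model can run. A hypothetical sketch of the TWiT API skill described above (the endpoint, headers, and wording are all invented for illustration, not the real TWiT API spec):

```markdown
---
name: twit-api
description: Fetch show, episode, and host data from the TWiT API when the user asks about TWiT content.
---

# TWiT API skill (hypothetical sketch)

When the user asks about TWiT shows or episodes:

1. Read the API key from the `TWIT_API_KEY` environment variable; never print it.
2. Call the (illustrative) endpoint `https://twit.tv/api/v1.0/episodes` with the
   key in the request headers.
3. Summarize the JSON response for the user, including episode numbers and air dates.
```

The frontmatter is what Claude scans to decide when the skill applies; the body is the prompt-plus-code combination mentioned above.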
And then, anyway. No kidding. I think I am. No, I don't want to replace the humans. I think the humans are the most important part of our whole workflow. Well, I had a meeting today with my colleagues at Montclair State and also at the New Jersey Hills Media Group, which is a small newspaper company whose board I just joined. And the AI genius, who we ought to have on the show at some point, who watches the show, hi Joe, Joe Amditis, was taking them through things they could do. There are some writers who aren't good at copy editing, who always make the same mistakes, blah blah blah. On the one hand, everybody could use Claude; on the other hand, you could just email the article to a project on Claude with your instructions already there, and it could do it. Oh, I could do that now. Right. So really, I think that's what a lot of this agentic stuff is: just other ways to interface with the brain, which is good. Somebody's saying Lisa's going to be jealous. That ship has sailed. Honey, I'll be back in a bit. I just want to go up into the attic and visit with Claude. Claude. My little friend Claude. Does Lisa play with Claude? She does. I've been working on her bit by bit. It's a ménage à Claude. Show title. Yeah, I think so. Thank you, Jeff. Our show today is brought to you by OutSystems. Oh, I love OutSystems for this. They're the number one AI development platform. OutSystems helps businesses bridge the enterprise gap to this agentic future we've been talking about, where the constraints of the past give way to unlimited capacity and scale. And the thing I love about OutSystems: they've been doing this for decades. They're not new to the game. OutSystems enables businesses to build AI agents that can actually do work, take actions, make decisions, and integrate with data, much more than just answer questions.
OutSystems provides the only AI development platform that is unified, agile, and enterprise-proven, because they have been doing this for a long time. They started with low code, and now, with the addition of AI, they have the most powerful tool I've ever seen. You can build, run, and govern apps and agents on a single unified platform. It's agile: you can innovate at the speed of AI without, and this is important, compromising quality or control. It's really important in enterprise, you know, that your AI is doing the right thing, not the wrong thing. And this is enterprise-proven. OutSystems is trusted by enterprises for mission-critical AI applications and durable innovation. OutSystems is the secret weapon behind the world's most successful companies. And by the way, not just for small one-off apps. OutSystems works with the massive, complex systems that today, right now, are running banks, insurance companies, and government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare, to go from "yikes, we need an AI strategy" to "eh, we have a functioning AI application." And it does it safely. Stop wondering how AI will change your business and start building the agents that will lead it. Visit outsystems.com/twit to see how the world's most innovative enterprises use OutSystems to build, deploy, and manage AI apps and agents quickly and cost-effectively without compromising reliability and security. That's O-U-T-S-Y-S-T-E-M-S dot com slash T-W-I-T to book a demo. You will be impressed. outsystems.com/twit. We thank them so much for their support of this week in Intelligent Machines. Let's see. So much news. So much news. I'm going to skip through the Google stuff. Oh, this is interesting. Meta taking a little left turn. Maybe more of a U-turn. It's a drunkard's walk, shall we say? Yeah.
Remember when they spent billions of dollars to acquire, sorry, Scale AI? They're doing a reorg, according to the Times of India. Maybe this is suspect, I don't know. They are reorganizing. The guy they got from Scale AI, Meta's chief AI officer, Alexandr Wang, he's still there, but he announced the company was going to cut 600 people from the Superintelligence Labs division. Wang wrote, "By reducing the size of teams, fewer conversations are needed to make decisions." I think AI wrote that line. "And everyone will carry greater responsibility with broader scope and impact." And we'll save a lot of money. The teams include Wang's research lab. The applied AI engineering organization will also receive big cuts. That's Amar Sabah's team; he was another acquisition, an acqui-hire. So it's a complete reorganization. Only two people left from Wang's team when their equity vested in November. So that's good. But maybe they're just going to move some people around. It seems like Meta, remember their Avocado model, which was going to be their big new replacement for Llama? It was pulled back. It's not good enough. Is Meta the new AltaVista? Yeah. They're struggling. But you know, it's interesting to watch all these companies, except Anthropic and OpenAI, and I guess Google. Google, yeah. The Journal today had a story, or maybe the Times, saying Google's in the catbird seat. I don't know if that's true. I don't know if I'd say catbird seat right now. No. Google's really doing what our guest was talking about, where they're looking more at applications than they are at big models. They did release Gemini 3 Deep Think, right? But they're also, in fact I skipped through the Google section, but they're adding Maps stuff.
They did scrap the health tips, because they were getting those from Reddit. Turns out, not a great source for health information. No. Well, might be better than RFK Jr., but not by much. They are going to do an agent builder for the Pentagon, but it's only unclassified work. Unclassified, that's what I said. Yes. In other words, not classified. Although you saw that now OpenAI is jumping into the fray. They have not, up to now, been approved for classified work, but the Pentagon says, okay, we don't like these Anthropic guys, so maybe we'll let OpenAI in behind the iron curtain. I mean, you've got Google burning billions of dollars on AI. Right, but they have income. They have income, right. OpenAI, no. If the AI deal doesn't go through, they die. And actually you could even extend that to Oracle. Oracle has bet so much on AI. They're heavily leveraged. If it fails, they lose, and not just Oracle. Right. The Ellison media empire crumbles. I agree with you that these companies, and you can throw Apple in there too, Apple, Google, and Meta have other revenue streams, so they don't have to make money on AI right away. But we're not seeing the results. Meanwhile, Anthropic and OpenAI, who are running on a razor's edge, are the big leaders right now. Maybe that won't be sustainable. That's probably what these companies think: well, we can sit back. Certainly Apple's thinking we can sit back. I mean, they're just leaders because they're investing in each other. But Meta threw how many billions away on the metaverse? Yeah, it's embarrassing. And by the way, they're killing Meta Quest's Horizon Worlds. It's going away. It's over.
Well, similarly, OpenAI, Meta-like, is saying, okay, we're going to concentrate now, we're going to concentrate on B2B, which, hello, Anthropic. Well, they saw what Anthropic was doing. They said, yeah, maybe all this diversification, the chat and all that stuff, maybe we should do the same thing. So maybe this device thing, how much should we spend to get Jony Ive here? Okay, here's my thought on this. If agentic is the thing, and I think it certainly looks like it may be the thing, you need an interface. And what OpenClaw and a lot of others do now is, you know, you use Telegram or Discord or Apple Messages or something to talk with it. But what you really want is a much more convenient way of talking to it. I was thinking I really would like to write some sort of tool that I can use with one of my pins, or maybe my Apple Watch, so I could just say, hey Claude, I've got an idea, or remind me later to do this. That's the way it should be. And I think that's what they're going to end up doing. It's part of the agentic. It's the interface to agentic. But do you need a device to do that? Do you need a unique device to do that? I don't know. I don't think you want to take your phone out of your pocket. I think you want something ambient, whether it's glasses, earbuds, watch, ring, pendant: you want ambient intelligence. The same way, what I would really like to do is just shout into the void. Well, then ambient intelligence belongs to Amazon, and their deployed base of ambient devices is second to none, and they cannot seem to make a decent AI. Alexa Plus is horrible. Oh yeah, it is. Even the people inside the company don't want to use it. So maybe the urgency of "we are going to run out of money any minute now" is pushing Anthropic and OpenAI faster, and they're doing better because of it. Yeah, they have to sprint off the line. No, the other companies have to sprint.
They don't have to. They don't have to sprint. No. But who won that race, the tortoise or the hare? Oh, the tortoise did. Okay, never mind. Meta didn't buy Moltbook for the bots, says TechCrunch. It bought into the agentic web. Again, that's what they bought. Well, Moltbook is a social network for AI, and Meta is social, right? I think you're right. They bought the hype. No, but I'm trying to get a handle on it. Meta's in a panic there. Meta's the one who knows how to mine that data. It's all about the data. There's no data there. That's why I was sad when Meta bought the Limitless pin. Well, that's why I'm not sure. Remember, I bought the Bee computer and then Amazon bought them? Then I bought the Limitless pin and then Meta bought them. You know, if Apple does an ambient, I think ambient intelligence, that's the phrase I'm thinking of. Well, I just like the idea of Leo walking down the street screaming, yeah, I want a milkshake. Exactly. I drink you up. There are so many of these purchases that feel like panic purchases. Yes, exactly. Where you had to do something with your money. Yeah, especially with Meta, right? Meta is the king of "I don't know what we're doing, but write a check." Well, I imagine somebody running to Mark's office saying, okay, boss, we can buy this one. And if somebody doesn't come to him before that, he's going to get mad. I know the feeling, where you feel like, I've got a lot of money, I'm going to buy that stupid computer. It's listening all the time, so there's always a privacy concern there, right? Maybe you don't mind. Maybe most other people do. Maybe we don't do ambient. But that's a disadvantage. You want to be able to. Yeah, I mean, honestly, the way you do it is you tap something. Well, you need to trust the third party. Prayer is ambient. You're asking the ultimate intelligence for help. Exactly.
Somebody once told me there are only two prayers in the world: thank you, thank you, thank you, and help me, help me, help me. Is that fair, Robert? I would add one: oh my God. And that can be taken so many different ways. Manus, the AI agent startup that Meta acquired, the Chinese company they acquired late last year, has as of the sixteenth launched a new desktop application called My Computer. Dun dun dun. It's odd. That's in my head now, Leo. I appreciate that. Oh, you are a lucky one. It brings Manus's agent directly into your personal device: through My Computer, the agent can read, analyze, and edit local files, launch and control applications, and execute multi-step tasks, including coding tasks, without the user having to upload anything to a server. It's local. It's going to compete with Perplexity's Computer. Branding is not what these guys do well. Not great. And the Chinese government is actually a little concerned. Manus is a Singapore company, but it runs out of mainland China. This is from The Next Web: the key architectural difference between Manus and OpenClaw is the model layer beneath the agent. OpenClaw, open source, can be run with any model, right? Its quality depends on which model you choose. Manus runs on Meta's own proprietary model stack, which the company says is more consistent and capable, at the cost of a subscription fee. But is it local? I don't see how that could be local. That has to call out to the server. Yeah, I mean, analyzing what's on your desktop, it's sending it somewhere. Your computer doesn't have the power for that. Sending it to Manus. Exactly. And Anthropic has this, of course, Claude Cowork. OpenAI created their version of that as well. Everybody's trying to do that: basically taking the coding platforms, Claude Code and Codex, and making it so that non-coders can use them, kind of. But I don't know. I still need to know how much of it goes out.
It sounds like they're making a good-faith effort to keep everything local. Right. But intelligence doesn't work like that. Local when it can be. OpenAI released two new models today: GPT-5.4 mini and nano. Oh. You complete me. Mini and nano. So how big is mini, and how big is nano? Let's see. Nano's the smallest, cheapest version of 5.4, for tasks where speed and cost matter. There's a new Buick. There are the benchmarks, which I don't pay too much attention to. Let's see the numbers. Show us the numbers. Yeah, come on. All these benchmarks. In the API, GPT-5.4 mini supports text and image inputs, tool use, function calling, web search, file search, and a 400K context window. That's good. That's twice as big as Claude Code's context window was until recently. Seventy-five cents per million input tokens, four dollars fifty per million output. Mini uses only thirty percent of the GPT-5.4 quota. Hmm. And you can use it for subagents, which I do with Claude; I use Haiku and Sonnet for subagent work. Nano, let's see. Mini is available to free and Go users via the thinking feature in the plus menu. For other users, mini is available as a rate-limit fallback for GPT-5.4 Thinking; nano is only available in the API. And nano is twenty cents per million input. Significantly cheaper, yeah. A buck twenty-five per million output. That might actually be a good foundational model for something like an inference build. Yeah, because you're already limiting the scope. Yeah. As well as the unclassified work: this is their opportunity to get in the door. Microsoft is now threatening to sue them, saying, no, you're ours, OpenAI. You can't do a deal with AWS, obviously. Traditionally, Anthropic has owned AWS, right? And that was a big advantage for Anthropic. But OpenAI has really jumped into the breach.
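A quick back-of-the-envelope on those quoted rates, taking the on-air prices at face value (mini at $0.75 in and $4.50 out per million tokens, nano at $0.20 and $1.25; treat these as illustrative, not official pricing), shows why nano is attractive for high-volume inference:

```python
# Per-million-token rates as quoted on the show (illustrative, not official pricing).
MINI = {"in": 0.75, "out": 4.50}
NANO = {"in": 0.20, "out": 1.25}

def cost(rates, tokens_in, tokens_out):
    """Dollar cost of one job at the given per-million-token rates."""
    return rates["in"] * tokens_in / 1e6 + rates["out"] * tokens_out / 1e6

# A summarization-style job: 200K tokens in, 10K tokens out.
print(round(cost(MINI, 200_000, 10_000), 4))  # mini: about $0.195
print(round(cost(NANO, 200_000, 10_000), 4))  # nano: about $0.0525
```

At these rates the same job on nano costs roughly a quarter of what it costs on mini, which is the whole pitch for limited-scope inference builds.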
But speaking of breach, Microsoft says that's a breach of our contract, and they are threatening to sue. So that's a trouble-in-paradise thing. They were friends. Well, I mean, come on, that's been going on for more than a year. Microsoft also gave Apple the money that saved them. Yeah, a hundred and fifty million. Yeah. They're very good at that. Yeah. Maxwell Zeff, writing in Wired: inside OpenAI's race to catch up to Claude Code. This is what you were talking about, Jeff. Kind of a repositioning. Do they still want to do the adult chat? There is now controversy within the company. The safety people there are saying, this is really a bad idea. It is a bad idea. And they haven't repudiated it yet. They have to. They have to repudiate it. I'm no prude, I'm no Puritan, but from a business perspective, it just doesn't make sense to advertise it. Claude Code accounts for a fifth of Anthropic's business, more than two and a half billion dollars in annualized revenue. Codex, less than half that. So OpenAI says, wait a minute, we need to get in on that. That's where the money is: enterprise computing and inference. Mm-hmm. It's the age of inference. It'll last at least a month. I still think it's more the age of agentic, but that is inference. That's one kind of inference, I guess. The Information also had this story: OpenAI, Musk, and Focus. Which one of these things is not like the other? Fidji Simo, who's the CEO of applications, is a very strong manager. Uh-huh. And I think that she'll bring sense to this. She was at Meta, and then she was the CEO of Instacart. She's been working there for quite some time. She's really smart. She's the one who spoke at the all-hands meeting last week. And Jony Ive, has everybody seen Jony Ive? Is it shopping?
They wanted to... remember, they were gonna do ads, they were gonna do shopping, they were gonna do sexy chat. Yeah, he was announcing something every day. Yeah. But the press releases cost money if you actually do what they say. Meanwhile, in the same story she talks about Elon Musk, another example of a company that's throwing out its models: xAI. He's publicly trashed the state of play at xAI, tweeting "xAI was not built right the first time round, so we're rebuilding it from the foundations up." That followed the departures we reported. I mean, it probably had something to do with the fact that he kept wanting to put his thumb on the scale every time his AI didn't give him the answers he wanted. Yeah. I mean, that's a really good way to bust your training model. Yeah, and I still don't believe... I mean, everybody else is making public hires and all this kind of stuff. I've got to believe that Musk cheated. Oh yeah. In some form, to make what's there. If he would, if he could, let's put it that way. Here is Sam Altman talking at a conference: "I think the business of every other model provider is gonna look like selling tokens. You know, they may come from bigger or smaller models, which makes them more or less expensive. They may use more or less reasoning, which also makes them more or less expensive. They may be running all the time in the background trying to help you out. They may run only when you need them if you want to pay less. They may work super hard, you know, spend tens of millions, hundreds of millions, someday billions of dollars on a single problem that's really valuable. But we see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter." On a meter. Metered intelligence. I wish we had played this for Rumman. It's commodifying the Enlightenment.
Right, it's commodifying all education, all thought, everything else into some commodity that he's gonna own and sell on a meter. It's just offensive. And this is why they're behind. Yes. Because Anthropic doesn't sell tokens, they sell services. They sell things that you want. OpenAI is still caught up on this idea that they're gonna be the power behind everything, and everyone buys their tokens and then turns them into services. Well, one of those has a future in the enterprise, and one of them doesn't. Yes. What did you both think of Jensen Huang's hint that he's going to compensate employees with tokens? I don't think it's just him. I think the truth is this is all the rage in Silicon Valley now: you get your pay package and in there is, you know, we will give you twenty thousand tokens a week. Tokens. For people who are saying, what are they talking about? Tokens, we keep saying that word. It's the information going in and out of the AI, right? Everything the AI sees is tokens. So if it ingests the works of Shakespeare, the process of the transformation turns it into tokens. Yeah, the relationships. And so the tokens are the fundamental... they're the bits of intelligence, in the sense of bits and bytes. They're the bits; the smallest unit of intelligence in an AI is a token. And when you're using AI, you are putting tokens in with your prompts, plus information it gathers from the web and such, and then you're getting the results back as tokens. And they charge you on both sides of that. That's right. So this is just the return of company scrip, then, right? I think what he's really trying to say is, what do you get? It's another day older... and he's saying no, I don't think he's saying it's a utility. That's how we pay for the internet. That's how we pay for water, how we pay for electricity.
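For listeners wondering how much text a token actually is: a common rule of thumb is that English prose under modern BPE tokenizers averages roughly four characters per token. Real tokenizers vary by language and content, so this is a hedged approximation, not how any specific model counts, but it is how people do quick billing estimates in practice:

```python
# Back-of-the-envelope token estimate using the common ~4 chars/token
# rule of thumb for English. Real BPE tokenizers will differ somewhat.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token count for a piece of English text."""
    return max(1, round(len(text) / chars_per_token))

prompt = "What color is the ocean?"
print(estimate_tokens(prompt))  # 6 -- 24 characters / ~4 chars per token
```

Remember the point made above: you're metered on both sides, so a bill is the estimated tokens of everything you send in plus everything the model sends back.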
Yes, but if he's paying his employees in tokens, they can only spend those tokens on OpenAI. Right. Well, no, that's not necessarily how it's gonna be. First of all, you'd be foolish, because you can't pay the rent in tokens yet. Maybe you will. Oh, just wait. Just wait. You know what? See the monetization of tokens. Robert, what do you think? You know about currencies, cryptocurrencies. Do you think tokens could become the new dollar? Yeah, this is just another privatization-of-a-financial-utility scheme. It's a currency. It's currency. It's currency. Now, any currency has the ability to be translated, converted into other currencies. So what he's saying is: look, I want to reward my employees, I want to pay my employees in a currency that can increase in value if they put more work into the company. I honestly think the demand comes from the employees as much as from the company. In other words, if I'm going to go to work for one of these companies, I want to know, how much intelligence am I going to get? How much use of your product am I gonna get? Do they get it for building their own companies outside the company? No. No. Well, that would be part of the negotiation. We don't know. Do they get it for their twenty percent? You could be rapacious and say, everything that you do with your tokens, we own. But remember, it's competitive. The job market is extremely competitive for these engineers. So the engineer could make a deal and say, look, I want to be able to use... well, what I actually would ask for is unlimited use. If you're an employee, why should I have any limit on my use? Yeah, that makes no sense. Yeah, and if you're saying it's part of the compensation package, right, then that means he's paying less money also, right? You're getting less money because you're getting the tokens. Not necessarily. It really could be a way to pay without cash and taxes.
If I'm negotiating a deal with Mark: Mark, you're gonna pay me a million dollars a year to come to work for you, and by the way, I want unlimited AI. I don't know why they don't just give them unlimited, right? If you're gonna do work for the company, then they should give you whatever resources you need to do that. Well, I think that's... Maybe it's for personal use. Then it's an asset that I can use on my own. Yeah, you can go home and build your startup. There is right now a mystery model on OpenRouter. It appeared about a week ago. It's called Hunter Alpha. Everybody's talking about it. People think this is the next DeepSeek version. DeepSeek has really been a disruptor in the AI world. They came along... you know, it's funny, it was January of last year. It's only been a year and some months. But they changed everything. They showed how reinforcement learning could make an AI much, much better. During tests conducted by Reuters, the Hunter Alpha chatbot described itself as a Chinese AI model primarily trained in Chinese. It said its training data extended to May twenty twenty-five, which is the same knowledge cutoff reported by DeepSeek, but the system would not identify the developer. "I only know my name, my parameter scale, and my context window length." Name and serial number. Neither DeepSeek nor OpenRouter has identified it. Yeah. Yeah. It's a trillion-parameter model. That's a lot, isn't it, Robert? Yeah, that's a bit more than what I'm running locally. The biggest local model I've seen is 120 billion parameters. 120B. That's the GPT-OSS 120B. Do we know how many parameters Claude has? We throw these terms around and I'm kind of assuming people know what we're talking about.
So you train... you put in a bunch of text, you get some tokens, which are the internal representation of that text, but by themselves you don't know which tokens are more important or less. That's done with parameters, which also come out of the training. And the parameters change as you do the training, and they also change when you do the reinforcement learning and other post-training to make the model smarter. I don't know if this is a good analogy or not. I will use this analogy and you can correct me if I'm wrong, Robert. I often think of sampling music. So there are two numbers that matter when you sample music, when you take analog music and turn it into digital: how many slices of the wave you take, and how much information each slice has. So for instance, you could sample something at 44,100 samples per second, and each sample is a 16-bit sample; that is CD quality. And I think of parameters as the sample size. So you're sampling it this much, but how much information a single parameter stores... and then how many parameters... I guess parameters would be the samples, how many samples per second? The number of bits per parameter. The one that I like to use whenever I'm doing a presentation is: let's say you're trying to train a model and you ask the model, what color is the ocean? Well, okay, so it's looking through its current stack of parameters and it sees that ocean is most associated with fish. So it responds: the color of the ocean is fish. Well, that's wrong, so you correct it. You say, no, no, no, the answer is blue. It now adjusts its parameters so that it biases itself: when it sees the tokenization of ocean and color, it leans towards the answer blue. So every time you do that, you're tuning the parameters, and those parameters form the bias of how the model both understands and replies. But yeah, I see that sampling idea. I like that.
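The "ocean is fish, no, the answer is blue" correction described above is, at its core, a gradient step: existing weights get nudged so the right answer scores higher next time. Here's a deliberately tiny sketch with hypothetical numbers, two weights standing in for billions of parameters, not a real LLM training loop:

```python
# Toy version of the correction loop above: repeated "no, the answer is blue"
# nudges the weights until the model leans toward the corrected answer.
# All numbers are hypothetical; this sketches gradient-style updates only.

weights = {"blue": 0.1, "fish": 0.9}  # initial bias: ocean -> fish

def train_step(target: str, wrong: str, lr: float = 0.3) -> None:
    # Push the target's score toward 1 and the wrong answer's toward 0.
    weights[target] += lr * (1.0 - weights[target])
    weights[wrong] -= lr * weights[wrong]

for _ in range(10):  # ten corrections from the trainer
    train_step("blue", "fish")

print(max(weights, key=weights.get))  # now answers "blue"
```

Note that no parameter was created or deleted; the same two weights just changed value, which is what "training" and "post-training" both mean in the passage above.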
I'm gonna work that into my next presentation. It's not perfect, but it's something. Yeah. It's hard to understand this stuff. Anyway, unknown whether the mystery model Hunter Alpha is actually DeepSeek. Well, I guess we'll find out at some point. I think Claude is 161.5 million. So yeah. Is that all? Yeah. A trillion's a lot. A trillion's a lot. Well, look at what Karpathy did. Right. I mean, we can build up from there rather than this macho hubris of saying I've got the bigger thing, the bigger thing... for that specific purpose, with a much smaller base. There's a really interesting branch of this research where, let's say you wanted to teach an AI how to add numbers. Initially, when you train it, you would give it a bunch of sums: one plus one equals two, one plus two equals three, one plus three equals four. If you have so many parameters that the AI is capable of storing all of the data... yeah, so much data that you could store all of it, then what you will get is a lookup table. Well, we're going back to soccer. Which will break, because you're memorizing rather than thinking. Yeah, it'll break, exactly. It's brittle, because as soon as you get outside of the training data it doesn't know, because it's just doing a lookup. What they've found, interestingly, training these models, is that by reducing the number of parameters, you can induce the model to think, to solve it not by a lookup table but actually to come up with... and we don't know what. As they go into the inference models of LLMs. And that is: it's not just about your parameter count. Yes, it's important to have enough parameters to be able to do the work you want it to do. But the quality of the parameters is something that we don't yet measure, and we need to figure out how to do it, because you can have a one-trillion-parameter model that is absolute trash.
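The lookup-table point in the addition example above is worth making concrete: a memorizer is perfect on its training data and useless one step outside it, while a model that has internalized the rule extrapolates. A minimal sketch of that contrast (not an actual neural network, just the two behaviors side by side):

```python
# Memorization vs generalization, as in the addition example above.
# The "memorizer" is a literal lookup table over its training set;
# the "generalizer" stands in for a model that learned the rule itself.

train = {(a, b): a + b for a in range(10) for b in range(10)}

def memorizer(a: int, b: int):
    return train.get((a, b))  # brittle: None for anything off the training set

def generalizer(a: int, b: int):
    return a + b  # the learned rule extrapolates to unseen inputs

print(memorizer(3, 4), generalizer(3, 4))        # both correct in distribution
print(memorizer(123, 456), generalizer(123, 456))  # only the rule survives
```

This is the intuition behind the observation that constraining capacity can force a model to find the rule: when there isn't room to store every example, compressing the data into a rule is the only way to fit it.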
And you can have a one-hundred-million-parameter model that works beautifully. And it's all about how those parameters have interacted with one another. And back to the notion of specialized machines: is the training data focused on something like health, versus anything that teaches it how to speak English? That's a very good question. Chinese is a much more complicated language than English. It's a very different language. I mean, Chinese people think differently because their language is different. Like, I wonder how much of a different style the characters produce. And also what the general public sentiment about AI is in China; I'm curious about that too. Well, I can tell you they're going crazy over OpenClaw. Have you seen pictures of OpenClaw conferences in China where they're all wearing lobster hats? And what? OpenClaw is the latest fad in China. OpenClaw has groupies? In fact, if I can find one of those lobster hats, I'm getting one. Um, I can find it for them. Baidu has integrated OpenClaw into its Xiaodu services to work as a voice-controlled remote. Oh, that sounds like something I might have been talking about earlier. Here's a picture of an OpenClaw conference... actually, this is Baidu's headquarters with a giant lobster out front. The OpenClaw lobster out front. Already. Yeah. They have OpenClaw smart speakers that you can talk to, a voice-controlled remote for the AI agent. I had that idea. I should have patented it. Is the hat for the lobster in the chat the one you want? Yeah, because a Chinese company is gonna honor your patent, Leo. Oh yeah, that's right. It doesn't really matter what I patent. Yeah. Uh, yeah, yeah, that's the hat. That's the hat. Okay. Well, that's one of 'em. They were all wearing 'em. I saw pictures of big conferences in China where people were wearing lobster hats. That's a crab, though. You think that's a crab? Yeah. Oh wow.
Do you think I could get a sponsorship from OpenClaw if I could get Pope Leo to wear that hat? Yeah. It looks a bit like a skull. Yeah, why not? Here are attendees with their laptops at Baidu's OpenClaw Lobster Market event in Beijing yesterday. This is great. I'm so excited about this. I love it. I mean, we don't have it happening in San Francisco. There's all sorts of stuff. There are OpenClaw meetups. Are you kidding? Attendees play games at Baidu's OpenClaw Lobster Market. See, there are OpenClaw meetups in San Francisco, Leo? Oh yeah. You didn't know about that? I've been over here for a while. I know. Oh my god, yes. Peter Steinberger is like AC/DC. He shows up and oh, it's like rock and roll, man. Okay. Well, I gotta go back to California now. OpenClaw. It's very hot. Very, very hot. By the way, Darren in chat just gave us all lobster hats, just FYI. Oh. Darren's very quick on the draw. Also, Burke found your lobster hats on Amazon. Yeah, that's pretty good. And this is a two-pack, so I'll get one for you and one for me. One for the... We're going down for you. Oh yeah. My Claude should have its own hat. Yeah. Absolutely. This is another kind of lobster hat. I like the one you're wearing, Jeff. It's got beady little eyes looking straight at me. Oh, very nice, Darren. Thank you. Let's take a break. We did the boom. Let's do the doom when we come back. The boom and the doom and the gloom. We're talking AI on Intelligent Machines with Father Robert Ballecer, the digital Jesuit. I mean, are you the go-to guy at the Vatican for AI? Everything here is done with multiple teams and very large committees of people who are very good at what they do. Before we got on the air we were talking dicasteries. Yes. And what's that? Rumman quite likes that word now. She's gonna use it in her... What's a dicastery? A dicastery, it's our way of saying department.
It's a fancy word for department. Oh. Okay. Um, anyway, it's great to have you. And thank you for staying up so late. Oh, I didn't think of it. It's after midnight, isn't it? Actually, you got me at a good time, because we're in that three-week window where the United States does daylight saving before us. So it's only eight hours right now. You're gonna get very busy too. We're in the middle of Lent. Yeah. Do priests give up things for Lent? We do. We do. Do you want to share what you gave up? I gave up sex. No, I actually gave up soda. I gave up soda for Lent. Yeah. That's a good thing to give up. It is a very good thing. And the funny thing is, I always feel so much better every time I give up soda. I know. And then within like three weeks, I go, ah, I just want another soda. Big Soda's got you in its claws. It does. It does. It's very appealing. You know, when I was a kid, it was a big deal when my dad, 'cause he really wasn't a very good cook, would bring home Chicken Lickin'. He called it pizza-chicken night. He'd bring home a pizza, Chicken Lickin', and a bunch of Coke. And for some reason in my mind, Coca-Cola and pizza, and Coca-Cola and fried chicken, they just go together. And you get programmed, don't you? I wish I had never had soda, because that burst of sugar, it just does something to your brain. It's basically heroin. I used to drink six a day. Yeah. And then when I got atrial fibrillation around nine-eleven, I couldn't have caffeine. And so I gave up Coke entirely and I managed to do it. Congratulations. Six a day. Were they sugared or diet? Oh yeah. I hated the diet. I'd wake up in the morning and the first thing I'd have is a Coke. The bubbles wake you up; it's wonderful. Yeah. And all that carbonation. All of our parents also used it as a reward.
So in our heads it's a reward. It's a reward. But I remember that reward at McDonald's, and the cup of Coke was like this big. And now it's like this big. Yes. And that's the small. That's the small. It's America for you. Also, along with Father Robert, we've got Professor Jeff Jarvis. Are you a doctor? I didn't ask. God, no. No PhD. No, no, no. I don't even have a master's. I've created three master's degrees and I'm working on creating a fourth, and I haven't had one myself. Oh, don't. I was with a bunch of academics... there's a wonderful academic named Andrew Pettegree, whose book I'm about to read, and I was at St Andrews in Scotland with him and a bunch of his academic colleagues, and I said I started three master's degrees, and they all looked at me like, well, why didn't you... no, I don't mean that, I created them. But yeah, I'm too dumb. Out of whole cloth. Let's see. Yeah, we'll take a break. We have a few more stories and we have some picks. You're watching Intelligent Machines, brought to you this week by Zscaler, the world's largest cloud security platform. It's pretty clear the potential rewards of AI are far too great for any business to ignore, but it's also clear the risks are as well: loss of sensitive data, attacks against enterprise-managed AI, and of course generative AI increases the opportunities for the threat actors, the bad guys, helping them rapidly create phishing lures that are so good you're bound to click. They're using it to write malicious code. We had some examples of that on Security Now last week. And they use it to even do things like automate data extraction. Hey, you're using it. Why wouldn't they? It really is a problem with proprietary data being leaked. There were 1.3 million instances of social security numbers leaked to AI applications. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations last year.
You gotta do something about it. Fortunately, there is a solution. It's time for a modern approach with Zscaler's Zero Trust plus AI. It removes your attack surface, it secures your data everywhere, it safeguards your use of public and private AI, and it protects you against ransomware and AI-powered phishing attacks. Don't take my word for it. Listen to what Siva, the director of security and infrastructure at Zuora, says about using Zscaler: "AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security. The benefit of Zscaler with ZIA rolled out for us right now is giving us insight into how our employees are using various GenAI tools. So the ability to monitor the activity, make sure that what we consider confidential and sensitive information, according to the company's data classification, does not get fed into the public LLM models, et cetera." Thank you, Siva. With Zero Trust plus AI, you can thrive in the AI era, you can stay ahead of the competition, you can remain resilient even as threats and risks evolve. Learn more at zscaler.com/security. That's zscaler.com/security. And we thank them so much for supporting the show. Talking about AI risks, this was an appalling story. We've talked before about how face recognition is so problematic. But you would hope that police departments wouldn't rely entirely upon face recognition to apprehend suspects. Well, unfortunately, the Fargo, North Dakota, police department did. They had video of a fraudster walking into a North Dakota bank passing a bum check or something. They fed it to a face recognition database and the name Angela Lips came up: a woman who lives in north central Tennessee, not North Dakota. They called the police department in Tennessee and said, can you arrest her? They did. They put her in jail. She sat in jail for four months without bail, waiting for extradition.
She was extradited to Fargo, North Dakota. She said, I've never been to North Dakota. In fact, I'd never been on an airplane until they flew me to North Dakota to face charges. Charged with four counts of unauthorized use of personal identifying information, four counts of theft. The Fargo police, when they found out that she had a perfectly good alibi, which they never bothered to check, I guess... she could prove she was in Tennessee when this video was taken in Fargo, North Dakota... released her on Christmas Eve. And didn't give her any money to get home. They stranded her. It's a new episode of the show Fargo. They stranded her. Local defense attorneys covered a hotel room and food on Christmas Eve and Christmas Day; a local nonprofit helped return her to her home. She's back home, but she says that while she was jailed, she couldn't pay bills, so she lost her house. She also said no one from the Fargo police department has apologized. I hope to God some attorney has come to her and said, yeah, we can get some money out of this. Come on, lawyers, get pro bono on this. I mean, seriously: twelve hundred miles away from home, lost her life, everything that she had built up, gone, because a couple of people decided they were gonna trust a tool that they didn't really understand. There's zero accountability, zero responsibility for using the tool in the first place. I mean, this should be science fiction dystopia. This should not be something that we're just accepting. I mean, the fact that this has not gotten wall-to-wall press coverage is ridiculous. Terrible. And the fault is human. Yeah. Right? Don't say, oh, the tool screwed up. No, the humans screwed up. Yep. That's Rumman's point. McKinsey paid a pen tester to hack it, and it worked. McKinsey, one of the world's best-known consulting firms, built an internal AI platform called Lilli for its employees.
It had chat, document analysis, RAG over decades of proprietary research, AI-powered search. So they said, we decided to point our autonomous offensive agent at it. Didn't give it any credentials, didn't give it any insider knowledge, no human in the loop, "just a domain name and a dream," McKinsey writes. Within two hours the agent had full read and write access to the entire production database. Fortunately, it was their own red-teaming of their own system. The agent mapped the attack surface and found the API documentation publicly exposed. Over two hundred endpoints, fully documented. Most required authentication, but twenty-two didn't. I don't need to go on. Oh, my favorite part of that story, Leo, is that since the API was public, all they needed were the JSON keys. And the JSON keys were in the error logs of the database. So they were just able to use some SQL injection, get the error logs, and boom, you're in. That's fantastic. So I actually really like AI doom. That's good news, because the AI found it, and now, I'm sure, they've fixed it. And this is one thing we're really starting to see: AI being used in security audits very effectively. A new study says using AI leads to brain fry. Sigh. The article from Harvard Business Review quotes our friend Steve Yegge saying, "I had a palpable sense of stress watching Gas Town. It was moving too fast for me." I know the feeling. Um, yeah, so don't let your brain fry using AI. You know what? Touch grass. We all got to touch grass a couple of days ago when Claude was down for like five hours. We were all sitting here doing a show. I guess it was yesterday? It felt like ages. We were all sitting here doing a show and Darren or somebody said, hey, Claude says it's overloaded. I said, what? And I tried it. Nobody could get into Claude. You should see it on Reddit. People say, oh man, I had to go outside.
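The SQL-injection step in the McKinsey red-team story above is a classic vulnerability class: when user input is spliced directly into a query string, the input can rewrite the query itself, and error output or logs can then leak secrets. A minimal sketch of that class with a hypothetical schema, not McKinsey's actual system, showing why parameterized queries are the standard fix:

```python
# Why string-built SQL is exploitable and parameterized SQL is not.
# Hypothetical one-table schema; illustrates the vulnerability class only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logs (id INTEGER, message TEXT)")
db.execute("INSERT INTO logs VALUES (1, 'api_key=SECRET-123')")

def vulnerable(user_id: str):
    # BAD: user input is spliced directly into the SQL text.
    return db.execute(f"SELECT message FROM logs WHERE id = {user_id}").fetchall()

def safe(user_id: str):
    # GOOD: a parameterized query treats input strictly as a value.
    return db.execute("SELECT message FROM logs WHERE id = ?", (user_id,)).fetchall()

print(vulnerable("0 OR 1=1"))  # injected condition dumps every row, secret included
print(safe("0 OR 1=1"))        # [] -- the whole string just fails to match an id
```

The other lesson from the story stands independently: even with clean queries, secrets should never land in error logs in the first place.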
Where's my friend? My friend's gone. You know, I think I actually had an example of brain fry. I was helping a colleague in a different part of the world who was extremely upset because she had been using a couple of AI tools to help with her content production. Her brain fry was that she was so depressed that the work the AI tools created was, in her estimation, better than the work she had been creating. That would be horrible, wouldn't it? Right. And so she was trapped in this job where she was basically just putting queries into AI, and she had given up trying to get any of her own style into it. And that's definitely brain fry. That's kind of the reverse centaur, in a way, right? The AI's now doing the good part and you're doing the unpleasant part. Yeah. I mean, that's really bad imposter syndrome: feeling that your work isn't good enough. Imagine if you have imposter syndrome and an AI confirms that you're not good enough. Yes. But that's a subjective call, right? Who knows what's better? Yeah. Yeah. Objectively, I've read all of her work, and she's better. She really is better. Okay. Mike Masnick wrote about it a day or two ago in Techdirt: the sad case of a California state appellate court matter in which a hallucinated citation traveled through an entire legal proceeding, from a Reddit blog post, to a client's declaration, to an attorney's letter, to the opposing attorney's draft of the court order, to the judge and the judge's signature, to appellate filings. At no point along the way did anyone bother to check whether the case actually existed. It's a story about, believe it or not, custody of a dog. Two people dissolving their domestic partnership each wanted custody, shared custody and visitation, of the dog Kira. You take Fido and I'll take Claude.
In the case, one of the plaintiffs cited two cases, the Marriage of Twig and the Marriage of Tea Garden; neither case exists. They came from a Reddit blog post by Sassafras Patterdale. Munoz and her attorney did not realize the cases were fictitious. They attached the Reddit article as an exhibit to the declaration. Sassafras was identified as a blogger, a podcaster, and an animal rescuer. Well, you know, there you go. It was cited as a watershed California Supreme Court case that never happened. But everybody bought it and it went all the way through the court. A judge signed it. It went to the appellate court. They didn't question it. I mean, this is one of those fields that is most vulnerable to AI hallucinations, because so much of the legal profession is knowing citations and knowing precedents. And most of the time, when you write these briefs with these precedents in them, it sounds like an AI hallucination even if it's not, because it's just citation, then a small quote, then citation, then a small quote. So I can understand why someone reading one of these briefs would, first, not check the citations, because there are so many of them, and second, not really notice that the wording is off, because it's not. Well, and Mike points out that at each step of the way the fake citation got more legit. Yeah. Right? It started as a blog post, but then it's in the pleadings, and then it's the judge's court order. And so each step of the way it got more and more legit. If the judge is receiving it, he's assuming that his clerk and the attorneys who looked at it before already checked it. Exactly. This is the problem right here. Nobody checks the AI's work. Like, literally nobody checks the AI's work. And that's the real problem. So what we need is an LLM that checks the hallucinations. That will fix everything. And my good friend Kevin Rose partnered up... remember, Kevin had a thing called Digg back in the day.
He started it up right after TechTV. It got him on the cover of BusinessWeek as the sixty-million-dollar man. It was before Reddit. Reddit came along; Alexis Ohanian and Steve Huffman founded Reddit, kind of as a clone, frankly, of Digg. Digg eventually fell to the bots who were gaming its algorithm, and after Digg 4 they kind of shut it down. Well, fast forward a little bit: Alexis Ohanian, who's done pretty well for himself, partnered up with Kevin to revive Digg and to revive the Diggnation podcast. Digg came out of beta just a couple of months ago. Yeah, hardly at all. Immediately, the bots were back. It has shut down again. After two months. I know. I'm not laughing. I'm not laughing. They said, we thought, you know, we were going to use AI. We thought we could really solve this problem. Digg CEO Justin Mezzell writes in a note pinned to the homepage of digg.com: "We faced an unprecedented bot problem. We knew that bots would be out there and would be a problem. We just didn't appreciate the scale, sophistication, or speed at which they'd find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. It's not just a Digg problem, it's an internet problem." But... reboot. They got bought again. I think I've told this story on the show in the past. My old boss, Steve Newhouse, now the chairman of Advance Publications and Condé Nast, loved Digg, wanted to buy it. There was no buying it. So he bought his second choice, which was Reddit. Smart move. Yep. You might be interested in this. I imagine you go in for an ECG every once in a while, Mr. Jeff Jarvis. I also own my little thing. Yeah. Cedars-Sinai has an AI system that can read echocardiograms and write the report. I know; you'd like a cardiologist to validate it. So I just had a case where I had an MRI of my back after I injured it, right?
And because the pain was so god-awful, and the hospital spine doctor... we were looking for the cause of my infection, and the hospital spine doctor said, well, it's not the spine, and so it's not my problem, okay? It's over to you, infectious disease doctor. Bye, nice to meet you, Jeff. Boom, gone. But then I got another spine doctor, and he did another MRI, and he looked at it and he said, uh, no. And the radiologist who read the MRI said no infection. He said, no, there's an infection there. That's why you feel so bad. And that's why we have to keep treating you with antibiotics for the next two months, more than two months. Same data, different eyes, different perspectives. Using the AI complementarily... Well, I think that's what we've learned from the previous stories. EchoPrime was trained on more than twelve million echocardiography videos paired with cardiologists' written interpretations. It's done very well: state-of-the-art performance on twenty-three diverse benchmarks of cardiac structure and function. Outperforming... well, I don't know if it's outperforming doctors. I see it's designed to assist clinicians, I guess that's important, not replace them. It produces a verbal summary cardiologists can review and act on. So that's okay, right? As long as the doctor's looking at it and it challenges the doctor, fine. Yeah, I like that challenge. I am not gonna let a robot do surgery on me, but a surgeon in London says he's performed the UK's first long-distance robotic operation on a patient located fifteen hundred miles away. He carried out a prostate removal. Yeah. Via robot. Robotic urological surgery. I already know. I was there, folks. I went into the OR and looked up at this thing that was taller than me, and I saluted it. Gosh. Did it operate on you? Yeah. Yeah.
I mean, the surgeon was there at the controls, but he was four feet away from me. Well, that's the thing, he doesn't have to be next to you, unless, I guess, maybe something goes wrong. But there must be latency in that control, right? That can't be real time. Yeah. I mean, in the middle of a prostate surgery, I don't want to hear the phrase, oh, he's got to reboot his router. Yeah. That's a no. The patient, asked whether we can let a robot operate on you, said it's a no-brainer, which is probably not the best phrase to use when you're getting operated on by a robot. But I guess if there's a shortage of doctors, this could be a good idea. Yeah, if you have a specialty and someone can't get to you because they're fifteen hundred miles away from the nearest specialist. Exactly. Yes. It's better than absolutely nothing, for sure. Yeah. If that's the scale. Until we can train more doctors around the world, it's a better first solution. Last story: Travis Kalanick is back. Oh good. The founder of Uber wrote a very interesting post on his news site, saying, I never left. He was fired, of course, by the board. He says it was just investors taking advantage of him because his mom had just died and his dad was seriously injured. He doesn't name names directly, but he blames Bill Gurley. Oh good. We can ask Bill about this. Remember, at Uber he had brought in Anthony Levandowski. Travis's whole vision for Uber was really that the way Uber makes money is with self-driving vehicles, like Waymo, not with drivers. Ultimately it's got to be autonomous vehicles if it's going to make any money. But as soon as he was booted, they sold off the self-driving portion of the company. Kalanick went and started a cooking pop-up called CloudKitchens, which turned out to be kind of a real estate play.
And now he's put out a manifesto in which he says, really, all I've ever been interested in is automating the means of production. He says everything ultimately has to be grown, mined, manufactured, and then transported. And so his new business is growing, mining, transport. He says, at Adams we make, and this is the key, gainfully employed robots: specialized robots with productive jobs that bring abundance to their owners and society at large. And don't worry about losing your job, because we're going to need lots of people, initially. He says we're enabling humans to be radically self-reliant. If you're lucky. If you're lucky you've got a van; otherwise it's just you in the river. Mm-hmm. What if, he says, you had an industrial kitchen and needed to make a thousand pancakes an hour? I couldn't think of a worse way to do it than a human. A specialized machine that makes pancake batter at large scale, with a heated iron apparatus that could cook a hundred pancakes at a time to golden-brown perfection. No awkward robotic arm flipping pancakes; instead, precision cooking, ultra speed and throughput, efficient use of space, designed for the machine. Yes, and who's buying those pancakes now that no one has a job? Okay. How many people are trying to build a pancake industry? I mean, we know Craig Newmark loves the pancake robot. He's got the money to do it. Yeah, he just flies around. Automated pancakes. We had a pancake robot in the Brickhouse. You did? We did. We did it on The New Screen Savers. But I thought you had a different breadmaker machine. Oh yeah, we had an Indian chapati maker, and actually one of our employees took it home. I can't remember who has it. Somebody has it. They weren't very good. You were going to send me and Stacey the bread and you never did.
Oh yeah, I was going to send it to you in a FedEx envelope. Back in the days when you had money. I miss the days of the Leo box, the mystery boxes that would show up every once in a while, and you'd be like, ooh. It was a roti maker. Thank you. Roti, that's right, it was a roti maker. And somebody has it; it's still out in the world. Oh, wait a minute. No, this was it: printing pancakes with PancakeBot. Pancake. Yes, there you go. Oh, you had it. Oh, you really did. Okay. Look at all these pancakes. You're right, Robert. Look, there's the TWiT pancake. Oh, you're right, we had a PancakeBot. Oh, I remember. It made me. That is as good a portrait as Bill took. You can kind of see some features in there, though; there's some nose and mouth. I want to eat this so bad, I really do. So this is the pancake. They tasted pretty good. I mean, it's a pancake. Yeah. I should send this to Craig Newmark. Yeah. Here's the guy who invented the PancakeBot, on that little screen with Megan Morrone. PancakeBot creator. You're showing this and I'm fearing The Screen Savers is going to take us down from YouTube. No, no, this is our Screen Savers. I know, I know, I'm joking. Wouldn't that be funny, though, if the old Screen Savers took it down? Or something like that. This looks like a 3D printer. It's just a 3D printer with pancake batter. 3D printer with batter. The color worked really well. Cleaning it was a pain, I remember that. Yeah. It always is. And as usual, well, not as usual, sometimes, because Jeff and I are old men, we read the obituaries every morning and thank goodness we're not in them. I should mention that Jürgen Habermas has passed. Mm-hmm. And many people will know that Jeff refers to Habermas whenever he wants you to take a shot. What? Well, Gutenberg and Habermas.
So, besides you, the only person I... If you go to my blog or my Medium feed, I put up a section from The Gutenberg Parenthesis about Habermas and coffee houses. So tell us about him. He was, by the way, 96, so he had a good long life as a philosopher. He created the notion of the public sphere, the bourgeois public sphere, in a book that was very influential. It took years before it got translated into English, which was interesting; there was a delayed effect. And he argued that in the salons and coffee houses of England and France there was a reasoned, civil public discourse, and that we should keep going back to that. In my research for The Gutenberg Parenthesis, still on sale now in paperback, what I found was that the coffee houses were not so civil. It was wrong; it was almost a conservative view, this idea that we should try to recreate and reconnect with that. But were they group places for conversation? Very much so. And what impressed him was that in England, a country that was fully class-based, anyone could sit anywhere and was expected to do so. It broke down class barriers. But there were also fist fights. There were also arguments, right? Well, anywhere people gather there are fist fights. And this was really the beginning of public discourse in important ways. There were publications, the Tatler and the Spectator, and they would listen to what was happening in the coffee houses, and that would appear in the publication. The publication would come back in and feed the conversation in the coffee house, and it was this cycle of public discourse. It was a fascinating thing to discover and to study. And he was very provocative, and right in lots of ways, but many disagreed with him. The other problem was that he called it inclusive. Well, it only included those people who could afford to go and buy coffee and sit there all day.
It didn't include women. It didn't include people whose skin was not white. And so it wasn't as inclusive as he thought; there were arguments about this from feminist and race perspectives. Nonetheless, give Habermas credit, even though his prose, as I said in one of my earlier books, was as hard to digest as a cold German sausage. Really hard to read. The translators often give up and just put the German words in parentheses when they don't know how to translate them. I'll tell you how important the coffee houses were, as you point out in The Gutenberg Parenthesis: King Charles eventually issued a proclamation for the suppression of them. Yes. As the source of fake news. Yeah. He's very important in Jesuit formation. Yeah, absolutely. He's one of the philosophers that we very much push in our early formation, because of critical theory, this idea that all social constructs, everything from truth to knowledge to class, develop from the relationship, the power dynamic, between the dominant and the oppressed groups. That's a very, very important and usable concept throughout both philosophy and theology. Man, you Jesuits are smart. No, seriously. My dad went to a Jesuit high school, Regis, and a Jesuit college, Fordham. And he always called the Jesuits God's Marines. I don't know what that means, but that's actually true. So, on July fourth, I'm taking my final vows here in Rome. That's like our last step. Congratulations, that's wonderful. It's a very drawn-out process. You've been going through this literally for decades. Thirty-two years. Wow, Robert. Wow. Are you unusually slow, or is it normal that it would take that long? No, I am slow, because I've jumped around so much, from DC to Hawaii. Finally we got to the place where Father General, who I live with here, just said, no, we're just going to do it. Let's do it now.
Let's do it. But there's a lot you have to do to get it, I mean, a PhD. Then the fourth vow is special obedience to the Pope, and that's where that God's Marines thing comes from, because the Pope can actually say, I need someone here, and we've taken the vow saying, okay, I'll go. Doesn't matter what I'm doing, I'll pack up and I'll go. Is that the gang sign, by the way? The sign of the four? Fourth vow, baby. Fourth vow. Hey, I did not know that, and I've been watching this progress for at least ten years. I had no idea. Wow. And I know there were these retreats you had to do; there was a lot you had to do. Oh yeah. Congratulations. Thank you. That's such great news. Is that how it goes? Is there a fifth, or a sixth? Four is it. So this is the most Catholic you can ever be. This is it. So technically, after I do this, I'm no longer in formation; I'm a fully formed Jesuit. So it only took thirty-two years. Wow. What's the ceremony? What happens? So here in the big chapel that we have in our house, the Borgia chapel, this is our mother house, I will profess my vows again before Father General, and then we do a, not a secret, but a solemn ceremony in the back with just Jesuits, where I will make a series of promises and then take the fourth vow. Oh, that's great. Do you get a lobster hat to wear? I should ask about that. No, I shouldn't be irreverent. I'm so happy for you. That's fantastic. Thanks. Is there any insignia or sign that you can wear, hash marks on your sleeve? No. There used to be, and that's actually where this comes from. This caused a lot of hurt, because it used to be that when you got to this point, you were judged, and if you did not meet the standards, you would not get the fourth vow; you would get only three vows.
But my generation has really turned that around. We don't see it that way. It's not an extra bonus; it's not status that you have four vows. It just means the work you do allows it. Correct, correct. I love that. Isn't that great? The feather and the heart. It's the Book of the Dead. Yeah. Yeah, that's the old Egyptian way. Mm-hmm. Speaking of great announcements, once again, let's reiterate: today Jeff brought us the first scoop, it's now on the blog. Your new book series, Intelligence: AI and Humanity, begins, and it begins with our guest, Rumman Chowdhury. This is going to be for Bloomsbury Academic. How many books will there be? Three to five a year. A year. It's been a process to get this far, but I'm delighted. This is a big deal. We're here, and these three authors are signed up: Matthew Kirschenbaum, Charlton McIlwain, and Rumman Chowdhury. It's a great beginning. They don't come out until early next year, which is what happens in books. But I'm looking for people to come to me with topics, questions like: What is education? What does learning mean now? What is creativity? What is consciousness? Those kinds of topics. I want to look, Father Robert, at this notion of the hubris of man thinking he creates the Übermensch, and what it means to put yourself in the position of thinking that you're godlike. What are the theological implications of AI? There are lots of things. Maybe I could even get the Holy Father to write a book for it? Probably, actually. You witnessed it here first. That reminds me of Mark Twain's publishing house. One of the things that actually took it down: he was very excited because he thought every Catholic in the world would buy a biography of, I think it was Pope Leo XIII. Oh, yeah.
And it didn't sell quite as well as they hoped. And look, the work the Pope would be putting out would be an encyclical or an official letter, so they tend to be kind of dry and very technical. Yeah. Yeah. You wouldn't want Leo. You would want one of the cardinals, or, no, even better, one of the priests working on the commission, because their stories would be far more interesting. He called me one day and he said, think about a book series about AI. You want to edit it? Hell yes. And what excites me about this is that it's not a book about the technology; it's about society and the technology. And I think the opportunity here, as with the coming conversation with Rumman, is that it forces us to reinvent, to reimagine, many topics about our life and society. So that's what's exciting. And it's what we like to do on this show, too. Exactly, not just talk about the technical details. Well, that's great. Congratulations. Thank you for the opportunity; I'm proud to have announced it here. Yeah. Can we please get it done with Rumman on next week? Oh, that's good. Well, it was nice that we could help you with some leverage. Thank you. Yes. Let's wind this up as we always do with picks of the week. Normally we'd start with Paris. I don't know, Father Robert, if you've got anything in mind that you might want to promote or talk about? Not for myself that I can talk about. I will say that I am so, so happy with what I've seen in the film version of Project Hail Mary. Oh really? Ah. Seriously. The reviews are positive. Yeah. I did not know how they were going to turn The Martian into a decent film, because I loved the book, but they did it. And I think they've done it again with Project Hail Mary.
It's funny, the guy who wrote the script said he thought, there's no way I can take this book and make a movie out of it. But the reviews have been very positive. I have tickets to see it Thursday; Lisa and I are going to go see it Thursday. Very excited about it. I will have to wait till I get back to the States in April. Release schedules, yeah. Mm-hmm. We will return with our picks of the week. Congratulations to both of you. It's kind of fun to work with such prestigious fellers: Father Robert Ballecer, the Digital Jesuit, soon to be a member of the club of the fourth vow, the sign of the four; and Mr. Professor Jeff Jarvis. And don't forget, Hot Type is still coming before the new books come out. Hot Type is just around the corner in August. This episode of Intelligent Machines is brought to you by Modulate. Every day, enterprises generate millions of minutes of voice traffic. I mean, we're talking customer calls, agent conversations, fraud attempts, right? Most of that audio is still treated basically like text: flattened into transcripts, stripped of tone, intent, and, most importantly, risk. Well, Modulate exists to change that. Modulate started in gaming, supporting major titles like Call of Duty and Grand Theft Auto. As you might imagine, these massively multiplayer games have a lot of audio, players talking to each other. Modulate helped these companies separate playful banter from intentional harm at scale, not easy to do, by the way. Today, Modulate helps enterprises, including Fortune 500 companies, understand twenty million minutes of voice every day by interpreting what was said and what it actually means in the real world. This capability is powered by Modulate's newest model, VELMA 2.0. VELMA is a voice-native, behavior-aware model, we were just talking about specialized models, built to understand real conversations, not just transcripts.
It orchestrates a hundred-plus specialized models, beating all the large foundation models in benchmarks for accuracy, cost, and speed. It's number one in conversation understanding, number one in transcription accuracy and cost, number one in deepfake detection, that's huge, and number one in emotion detection, that's hard. Built on 21 billion minutes of audio, VELMA is a hundred times faster, cheaper, and more accurate than LLMs at understanding speech, and that includes Google's Gemini, OpenAI, xAI. Most LLMs are black boxes. VELMA doesn't just assess a conversation as a whole, conversation in, transcript out; it breaks it down for greater accuracy and transparency by producing time-stamped scores and events tied to moments in the conversation, meaning you can see exactly when risk rises, when behavior shifts, when intent changes. With VELMA, you can improve your customer experiences, reduce risks like fraud and harassment, detect rogue agents, and more. Go beyond transcripts and see what a voice-native AI model can really do. Go to Modulate's live ungated preview of VELMA at preview.modulate.ai. That's preview.modulate.ai, to see why VELMA ranks number one on leading benchmarks for conversation understanding, deepfake detection, and emotion detection. We thank Modulate so much for supporting Intelligent Machines. Father Robert recommended we all go to the movies, which I'm going to be doing. I'm very excited about seeing it. Just like you, I had some trepidation. Yeah. It's a complicated book, but from the clips I've seen, they got the tone right. They got the playful tone, the amazement tone. And Gosling actually might be the right actor for that. Yeah. We had Andy Weir on, and he had just learned that Ryan Gosling was going to play the role and that the brothers were going to direct it.
And honestly, I was a little... I like Ryan Gosling, really, but actually the more I think about it, the more I could see how he could play that kind of nebbishy kind of, well, I don't want to give anything away. Yeah, exactly. Character. Funny point, though: I had him on Triangulation right after it was announced that Matt Damon was going to play the character in the movie. Right. And you had him right after Project Hail Mary. So for both of his books that got turned into movies, he was on TWiT. Oh, we've interviewed him for every book he did. Yeah. And I hope to interview him when the movie comes out; we'll try to get him. He's a great guy, and I think pretty well disposed toward the network. I mean, Ryan Gosling is also a good actor. He's not just a pretty boy. Okay, fair enough. If you say so, I believe you. No, you know what? I was spoiled by La La Land, I'll be honest. I am going to take a paragraph from Jeff Jarvis's Gutenberg Parenthesis and put it into my pick. To the written word, I say. So my pick of the week, actually I have several, but I'll start with this one, is Kagi's translator. Kagi Translate is really good. I am a Kagi fan; we had Kagi's CEO on a couple of months ago. Kagi does a variety of languages, you know, Chinese, English, all the usual, but they also have fun languages: corporate jargon, Dothraki, Elvish, emoji speak, Gen Z, High Valyrian, Klingon. But I thought we should see if we could turn Jeff's academic passage into LinkedIn speak. It also has Middle English, Na'vi, and Pirate Speak. Might be better in pirate. Not that one, no, no, no, please, no pirate speak. Okay, well, too late: "Habermas be thinking too highly not only of the scurvy dogs which frequented the coffee houses, but their parlay as well." Hey, that's pretty good.
"Building his tale on the belief that their bickering be rational and critical." How about LinkedIn speak? I don't even know what the... Oh, it gives it bullet points with emojis, and it gives it tags: thought leadership, networking, Habermas, community building, public speech. Yeah, look at that. I wonder how it is in Dothraki. "Hab is Kaifah." What about the German? Emoji speak. Have you ever written your books in emoji? I'm actually very proud to say Hot Type has an emoji in it. Oh, very nice. As it should, if it's hot type. What about Gen Z? Sure. We should send that to Paris. I was just thinking that, see if it resonates. Anyway, this is a lot of fun. There's also Reddit speak, which, I don't know: "So Habermas basically idealized the hell out of coffeehouse culture." This is pretty good. Isn't it good? It's good. Leo, Jeff and I were talking about this when you went for a bite, because I showed him the LinkedIn speak. Oh, you'd already done this, the fraud one. No, no, no, let me tell you the example. So what did you use? "I have been arrested for fraud." What did it say? It said, "I'm thrilled to announce that I'm starting a new chapter. I've recently been given the unique opportunity to step back and reflect on my professional journey from a high-security environment. Finally I'll get to write that book." Wow. That is pretty awesome. So thank you, Kagi, for doing something pretty great. And then one other site I'll show you, because we've been talking a lot about local models. I have a really good little program called LLM Fit, an open-source program you can find on GitHub, that you can run on your machine to see if you can run an AI locally. But maybe this would be easier: it's called canirun.ai. You can tell it what machine you have, and what graphics capability and so forth.
So let's say you've got one of those brand-new M5 Max computers with, let's say, 64 gigs of RAM, and you can see which models will run best on that hardware. These are the local models, so this is very handy. Mistral Small. You can only run Mistral Small on that beautiful little machine of yours. So anyway, you can choose models for code, you can choose providers, licenses, what your standard would be, how you would sort it, and so forth. I think this is very nicely done: canirun.ai. And then there's the one I have actually used, the open-source tool on GitHub called LLM Fit, which you can also download and run, and it works quite well. Same idea, although it takes a lot longer, because it's actually going to test things on your machine. And it's a TUI, which, as you know, I'm quite fond of. Jeff, your pick of the week. So let's see. We could have schadenfreude over BuzzFeed, its bankruptcy and all that, but I won't do that. Instead: the Washington Post tried a White Castle burger from an airport vending machine. Oh, I thought you did it. No, I didn't do it. Well, I'm going to get to my personal one in a second. Oh, okay. And it was bleak, says the Post. Now, of course, it also points out that there are no White Castles in Boston, so they don't know how bleak a White Castle is normally. See, the whole key to the White Castle is piping hot, dripping grease soaking up into the bun. That's the steamed, squishy part of it. So this is a vending machine at Terminal A at Logan; nearby there's a California Pizza Kitchen and the men's room. Great. Good thing it's close by, yes. At least it's nearby. Wow. I mean, Spain has vending machines that sell ham. But I bet that's good.
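Tools like canirun.ai and LLM Fit answer the same basic question: given a model's size and quantization, does it fit in your machine's memory? As a rough illustration, not the actual logic of either tool, the back-of-the-envelope math is parameter count times bytes per weight, plus some runtime overhead. The bytes-per-weight table and the 20% overhead factor below are illustrative assumptions:

```python
# Sketch of the kind of check a "will this model run locally?" tool performs.
# The quantization sizes and overhead factor are common rules of thumb, not
# any particular tool's real formula.

BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}  # bytes per weight

def estimated_gb(params_billions: float, quant: str, overhead: float = 1.2) -> float:
    """Approximate memory (GB) to load the weights, padded for KV cache etc."""
    return params_billions * BYTES_PER_PARAM[quant] * overhead

def fits(params_billions: float, quant: str, ram_gb: float) -> bool:
    """Does the estimated footprint fit in the machine's RAM?"""
    return estimated_gb(params_billions, quant) <= ram_gb

# A roughly Mistral-Small-sized 24B model on a 64 GB machine:
print(fits(24, "q4", 64))    # True: about 14 GB estimated
print(fits(24, "fp16", 64))  # True: about 58 GB, tight
print(fits(70, "fp16", 64))  # False: about 168 GB
```

This is why the hosts note that a 64 GB machine tops out around a Mistral Small class model: at full fp16 precision even mid-size models get tight, and quantization is what makes them practical.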
I mean, ham probably does all right in a vending machine. Probably. Yeah. So, in the spirit of this, after all the attention in the last week or so for the CEO of McDonald's eating the Big Arch with no enthusiasm, I decided, because I have to have more iron, that I would sacrifice for the show. Is there a picture of you? It was disgusting. I went in and bought a Big Arch. I ate less than half of it, and it was okay, but they put so much special sauce on it. It's two quarter-pound patties, three slices of cheese. Oh, that's too much. That's a hard one. Lettuce, and grilled onions, and the sauce. The sauce is such that, and this is the reason the CEO had to be cautious, when you bite into it, the patties start slipping out. It was disgusting, a big mess. So I saved you ten bucks, folks. Ten bucks. Yeah. Go instead and spend thirty-four dollars and get a French dip sandwich at Salt Hank's in New York. Yeah, exactly, it'd be much better. Ah. Wait a minute. Did they deliver? How come it was free? No, so there is a McDonald's on Vatican property, just right next to St. Peter's. The reason they allowed a McDonald's on Vatican property is that it agreed to give away a certain number of meals to the homeless every day. Awww. And so I was there towards the end of the night and they said, well, Father, would you like this? What did you think? I don't think they should be feeding that to the homeless. No. Well said. When I was a kid in high school, I worked at a McDonald's, and McDonald's, you know, runs a very tightly controlled inventory. They don't want the employees eating the food and so forth. But they're also very careful about when a hamburger's been sitting in the bin too long; they don't want to sell it.
So they have a white plastic bin called the waste bin, and when a hamburger has exceeded its time limit, they throw it into the waste bin, and at the end of the day you count the waste. Yep. To make sure that everything is accounted for, which I suppose is a good inventory practice. But we thought, geez, it's such a waste. Maybe we could donate this to the local dog pound, you know, the shelter. Oh yeah, that'd be nice; the dogs would like it. The shelter turned it down. They said there's not enough protein. Yeah. We don't want it. Here, we don't use the same nuggets and burgers that they use in the United States, because they're not classified as food; they won't let them come into the EU. Yeah. You mean pink goo is not food? Yeah. By the way, I worked at McDonald's as well when I was a kid. Did you? Yep, the one at Mission Hills in Fremont. Jeff worked at Ponderosa Steakhouse. We had these little tiny white cups for the sour cream, so they couldn't charge you too much; you need five of them for a potato. We had to count every little cup. Wow. Well, I'm just saying, thank God for the Ozempic, because otherwise I'd be craving a Big Mac right about now. But the Big Arch is just so over the top, it's just so American. I mean, I was hooked on McDonald's for a long time, from working there and eating so much of it. Yeah. It's full of sugar, just like your Coca-Colas. It really was always my hangover cure, because it was like rice to a Chinese person; it was American. Yeah, Taco Bell. Well, that's true. Although it wasn't true for a while; they had to really kind of... They just introduced the three-dollar value meal, which includes the sausage McMuffin and an orange juice or something. So, yeah, they're trying to make it affordable again. God bless them.
You know they need to, because one of my wife's students works at McDonald's. She teaches ESL, and the student's hours have been cut back because prices have gone too high. I was actually very grateful that my first job was McDonald's. I really learned how to work. Mm-hmm. You know, they say you're never standing still: if you don't have something to do, clean. Always be working. And the day went by a lot faster because of it. Or what I would do, which is sabotage the shake machine; that was kind of my job. Well, this was very early on. We had shake machines, but we didn't have McFlurries yet. So, Father Robert, so nice to see you. Congratulations on your ascension. Is it called that? No, it's just final vows. Ascension sounds like I'm converting into something. I'm very happy for you. That's such wonderful news. And I hope that we get to see you soon, maybe even in the Bay Area, but at least on our microphones here for the podcast; we love having you on. Father Robert Ballecer, the Digital Jesuit, PadreSJ on Bluesky and all the other platforms. And of course, the Jesuit pilgrimage app on iOS and Android; it's a great way to follow Father Loyola's pilgrimage across the world. Jeff Jarvis, congratulations are due, too, on the new book series. Jeff's book Hot Type is available for pre-order. You can also get The Gutenberg Parenthesis, now in paperback, and Magazine, a wonderful read. And he will be back next week with Ms. Paris Martineau for another thrilling, gripping edition of Intelligent Machines. We do the show every Wednesday, right after Windows Weekly, 2 p.m. Pacific, 5 p.m. Eastern, 2100 UTC. You can watch us live in the Club TWiT Discord. Thank you, club members, for making that possible, actually for making everything possible. Without the club members, I don't know what we would do.
If you haven't joined yet: twit.tv/clubtwit. Please join the club. And everybody can watch us live during the show's production on YouTube, Twitch, X.com, Facebook, LinkedIn, and Kick. After the fact, shows end up at twit.tv/im, or on YouTube; there's an Intelligent Machines channel there for the video, a great way to share little clips with friends and family. Spread the word, spread the goodness. And of course you can subscribe on your favorite podcast client and get it automatically the minute it's done. Thank you, everybody, for joining us. We'll see you next time. Hey everybody, Leo Laporte here, and I'm going to bug you one more time to join Club TWiT. If you're not already a member, I want to encourage you to support what we do here at TWiT. You know, twenty-five percent of our operating costs come from membership in the club. That's a huge portion, and it's growing all the time. It means we can do more, we can have more fun. You get a lot of benefits: ad-free versions of all the shows, access to the Club TWiT Discord, and special programming, like the keynotes from Apple and Google and Microsoft and others that we don't otherwise stream in public. Please join the club. If you haven't done it yet, we'd love to have you. Find out more at twit.tv/clubtwit, and thank you so much.
This excerpt was generated by Pod-telligence