
How to Fix the Internet

Electronic Frontier Foundation (EFF)

Future Roles in an Automated World

From Separating AI Hope from AI Hype, Aug 13, 2025

Excerpt from How to Fix the Internet


People who believe that superintelligence is coming very quickly tend to think of most tasks that we want to do in the real world as being analogous to chess, where it was the case that initially chess bots were not very good, at some point they reached human parity, and then very quickly, simply by improving the hardware and later by improving the algorithms, they became vastly, vastly superhuman. We don't think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, that require common sense, that require a kind of understanding of a fuzzy task description, where it's not even clear when you've done well and when you've not done well. We think that human performance is not limited by our biology. It's limited by our state of knowledge of the world, for instance. So the reason we're not better doctors is not because we're not computing fast enough. It's just that medical research has only given us so much knowledge about how the human body works and how drugs work and so forth. That's one reason. And the other is that you've just hit the ceiling of performance. The reason people are not necessarily better writers is that it's not even clear what it means to be a better writer. It's not as if there's going to be a magic piece of text, you know, that's going to persuade you of something that you never wanted to believe, for instance, right? We don't think that sort of thing is even possible. And so those are two reasons why, in the vast majority of tasks, we think AI is not going to become better, or at least much better, than human professionals. That was Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation. And I'm Jason Kelley, EFF's activism director. This is our podcast series, How to Fix the Internet.
On this show, we try to get away from the dystopian tech doomsayers and offer space to envision a more hopeful and positive digital future that we can all work towards. And our guest today is one of the most level-headed and reassuring voices in tech. Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He's also the co-author of a terrific newsletter called AI Snake Oil, which has also become a book, where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits. He's also a self-described techno-optimist, but he means that in a very particular way. So we started off with what that term means to him. I think there are multiple kinds of techno-optimism. There's the Marc Andreessen kind, where, you know, let the tech companies do what they want to do and everything will work out. I'm not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that, so that we can then realize what our positive future is. So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes. I was growing up in India and, frankly, the education system kind of sucks. My geography teacher thought India was in the southern hemisphere. That's a true story. And, you know, there weren't any great libraries nearby. And so a lot of what I knew I not only had to teach myself, but it was hard to access reliable, good sources of information. We had a lot of books, of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-ROM encyclopedia on it. That was a completely life-changing moment for me, right? That was the first time I could get close to this idea of having all information at our fingertips.
That was even before I had internet access. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science. Of course, I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology, as opposed to more of the tech itself. Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in internet access, which, if done right, has been bringing a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the Western world, with our institutions and so forth. So let's drill down a second on this, because I really love this image. You know, I was a little girl growing up in Iowa, and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to. So, you know, from all around the world there's this experience. And depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD-ROM encyclopedia, but it's that same moment. And I think that that is the promise that we have to hang on to. So what would an educational world look like, you know, if you're a student or a teacher, if we get AI right? So let me start with my own experience. I actually use AI a lot in the way that I learn new topics.
This is something I was surprised to find myself doing, given the well-known limitations of these chatbots around accuracy, but it turned out that there are relatively easy ways to work around those limitations. One kind of user adaptation is to always be in a critical mode, where you know that out of ten things the AI is telling you, one is probably going to be wrong. And being in that skeptical frame of mind actually, in my view, enhances learning, and that's the right frame of mind to be in anytime you're learning anything, I think. So that's one kind of adaptation, but there are also technology adaptations, right? The simplest example: if you ask the AI to be in Socratic mode, for instance, in a conversation, a chatbot will take on a much more appropriate role for helping the user learn, as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts, which actually limits their critical thinking and their ability to learn and grow, right? So that's one simple example to make the point that a lot of this is not about AI itself, but how we use AI. More broadly, in terms of a vision for what integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning, but a lot of people in the AI industry have taken it as a manual or a vision for what this should look like. But even in my experiences with my own kids, right, they're five and three. Even little things like, you know, I was talking about fractions the other day, and I wanted to help her visualize fractions, and I asked Claude to make a little game that would help do that.
And within, you know, thirty seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. It would divide the line segment into five parts, highlight three, show how close the child's guess was to the correct answer, and, you know, give feedback, that sort of thing. And you can kind of instantly create that. So this convinces me that there is, in fact, a lot of potential in AI and personalization. If a particular child is struggling with a particular thing, a teacher can create an app on the spot, have the child play with it for 10 minutes, and then throw it away and never have to use it again. That can actually be meaningfully helpful. This kind of AI and education conversation is really close to my heart, because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene, he was so excited, for exactly the reasons you're talking about. But at the same time, a lot of schools immediately put in place ChatGPT bans and things like that. And we talked a little bit on EFF's Deeplinks blog about how, you know, that's probably an overstep, in the sense that people need to know how to use this, whether they're students or not. They need to understand what the capabilities are, so they can have the sorts of uses of it that adapt to them, rather than just, you know, immediately trying to do their homework. So do you think schools, given the way you see it, are positioned to get to the point you're describing? I mean, that seems like a pretty far future, where a lot of teachers know how AI works or school systems understand it. Like, how do we actually do the thing you're describing? Because most teachers are overwhelmed as it is. Exactly. That's the root of the problem. I think there need to be, you know, structural changes; there needs to be more funding.
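The fraction game Narayanan describes is simple enough to sketch in a few lines. This is a hypothetical reconstruction of the game's core logic, not the code Claude actually produced; the function names and the 5% tolerance are invented for illustration.

```python
import random

def make_fraction():
    """Generate a random proper fraction, like 3/5 (illustrative game logic)."""
    denominator = random.randint(2, 9)
    numerator = random.randint(1, denominator - 1)
    return numerator, denominator

def feedback(guess, numerator, denominator, tolerance=0.05):
    """Compare the child's slider position (0.0 to 1.0) against the fraction's
    true value and return a hint, as the described game would."""
    target = numerator / denominator
    error = abs(guess - target)
    if error <= tolerance:
        return "Great job! That's right where {}/{} lives.".format(numerator, denominator)
    elif guess < target:
        return "Close! Try sliding a little to the right."
    else:
        return "Close! Try sliding a little to the left."
```

A real version would draw the divided line segment and the slider; the point, as in the interview, is that the whole interaction loop fits in a throwaway script a teacher could generate on the spot.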
And I think there also needs to be more of an awareness, so that there's less of this kind of adversarial approach. I think about, you know, the levers for change where I can play a little part. I can't change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now, today, is not the most helpful, and it can be reframed in a way that is much more actionable for teachers and others. So there are a lot of studies that look at what the impact of AI in the classroom is, and to me they're the equivalent of asking, "Is eating food good for you?" It's addressing the question at the wrong level of abstraction. You can't answer the question at that high a level, because you haven't specified any of the details that actually matter. Whether food is good for you entirely depends on what food it is. And if the way you studied that was to go into the grocery store and sample the first fifteen items that you saw, you're measuring properties of your arbitrary sample instead of the underlying phenomena that you want to study. And so I think researchers have to drill down much deeper into what AI for education actually looks like. Right. If you ask the question at the level of "are chatbots helping or hurting students," you're going to end up with nonsensical answers. So I think the research can change, and then other structural changes need to happen. I heard you on a podcast making a similar point about AI, which is, you know, what if we were deciding whether vehicles were good or bad, right? Everyone could understand that that's way too broad a characterization of a general-purpose kind of device to come to any reasonable conclusion. So you have to look at the difference between, you know, a car, a taxi, and various other kinds of vehicles in order to do that.
And I think you do a good job of that in your book, at least in kind of starting to give us some categories. And the one that we're most focused on at EFF is the difference between predictive technologies and other kinds of AI, because I think, like you, we have identified these predictive technologies as kind of the most dangerous ones we see right now in actual use. Am I right about that? That's our view in the book, yes, in terms of the kinds of AI that have the biggest consequences in people's lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime, and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they're predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past. Right. So there are two questions here, a technical one and a moral one. The technical one is: how accurate can you get? And it turns out, when we review the evidence, not very accurate. There's a long section in our book, at the end of which we conclude that one legitimate way to look at it is that all these systems are predicting is that the more prior arrests you have, the more likely you are to be arrested in the future. So that's the technical aspect. And that's because, you know, it's just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of others are spur of the moment, or depend on random things that might happen in the future. That's something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to suspend common sense and somehow believe the future is actually accurately predictable.
The other piece that I've seen you and others talk about is that the only data you have is what the cops actually do. And that doesn't tell you about crime. It tells you about what the cops do. So my friends at the Human Rights Data Analysis Group called it predicting the police rather than predicting crime. And we know there's a big difference between the crime that the cops respond to and crime in general. So it's going to look like the people who commit crimes are the people who always commit crimes, when it's just the subset that the police are able to focus on, and we know there's a lot of bias baked into that as well. So it's not just what's inside the data, it's what's outside it, in terms of these prediction algorithms and what they're capturing and what they're not. Is that fair? Totally, yeah. That's exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance. And, you know, it's not the same morally problematic situation where you're denying someone their freedom, but a lot of the same pitfalls apply, I think. One way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that, yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand, for each open position. They're not able to manually go through all of them. So they want to automate the process. But that's not actually addressing what is broken about the system. And while they're doing that, the applicants are also using AI to increase the number of positions they can apply to, and so it's only escalating the arms race, right? I think the reason this is broken is that we fundamentally don't have good ways of knowing who's going to be a good fit for which position.
And so, you know, by pretending that we can predict it with AI, we're just elevating this elaborate random number generator into a moral arbiter. And there can be moral consequences of this as well. Obviously, you know, someone who deserved a job might be denied that job. But it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies, so that every company someone applies to is judging them in the same way. So in our view, the only way to get away from this is to make the necessary organizational reforms to these broken processes. Just as one example, in software, many companies will offer people, students especially, internships, and use those to make a more in-depth assessment of a candidate. I'm not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element, instead of trying to be more superficial and automated with AI. One of the themes that you bring up in the newsletter and the book is AI evaluation. Let's say you have one of these companies with the hiring tool. Why is it so hard to evaluate the effectiveness of these AI models or the data behind them? I know that it can be difficult if you don't have access to them, but even if you do, how do we figure out the shortcomings that these tools actually have? There are a few big limitations here. Let's say we put aside the data access question, and the company itself wants to figure out how accurate these decisions are. Well, yeah, exactly, they often don't want to know. But even if you do want to know, in terms of the technical aspect of evaluating this, it's really the same problem the medical system has in figuring out whether a drug works or not. And we know how hard that is. That actually requires a randomized controlled trial.
It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands, of people, follow them for a period of several years, and figure out whether the treatment group, to which you either, you know, gave the drug or, in the hiring case, applied your algorithm, has a different outcome on average from the control group, to which you either gave a placebo or, in the hiring case, applied the traditional hiring procedure. Right. So that's actually what it takes. And, you know, there's just no incentive in most companies to do this, because obviously they don't value knowledge for its own sake, and the ROI is just not worth it. The effort they're going to put into this kind of evaluation is not going to allow them to capture the value of it; it brings knowledge to the public, to society at large. So what do we do here, right? Usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we're pretty far from having a cultural understanding that this is the sort of thing that's necessary. Just like the medical community has gotten used to doing this, we need to do it whenever we care about the outcomes, right? Whether it's criminal justice, hiring, wherever it is. I think that'll take a while, and our book tries to be a very small first step towards changing public perception: that this is not something you can somehow automate using AI. These are actually experiments on people, and they're going to be very hard to do. Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology: enriching people's lives through a keener appreciation of our increasingly technological world.
And portraying the complex humanity of scientists, engineers, and mathematicians. We also want to thank our EFF donors. You're the reason that we exist. EFF has been fighting for digital rights now for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to EFF.org/pod to donate. And also, if you can't make it in person to this year's EFF Awards, where we celebrate the people working toward the better digital future that we all care so much about, you can watch the whole event at EFF.org/awards. We also wanted to share that our friend Cory Doctorow has a new podcast. Have a listen to this. How did the internet go from this? You could actually find what you were looking for right away. Down to this? I feel like I'm in hell. Spoiler alert: it was not an accident. I'm Cory Doctorow, host of Who Broke the Internet from CBC's Understood. In this four-part series, I'm going to tell you why the internet sucks now, whose fault it is, and my plan to fix it. Find Who Broke the Internet on whatever terrible app you get your podcasts. And now back to our conversation with Arvind Narayanan. So let's go to the other end of the AI world, the people who are focused on, I think they call it AI safety, the, you know, robots-are-going-to-kill-us-all kind of concerns. 'Cause that's a piece of this story as well, and I'd love to hear your take on the doomer version of AI. Sure. There's a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I'm glad that folks are studying AI safety and the unusual, let's say, kinds of risks that might arise in the future, that are not necessarily direct extrapolations of the risks that we have currently.
But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, you know, such as curbing open-weights AI, for instance, because you never know who's going to download these systems and what they're going to do with them. So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions we would need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kinds of nonproliferation measures, as we call them, are, in our view, almost guaranteed not to work. And to even try to enforce that, you're kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere, and make sure that the few companies that are going to be licensed to do this are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models. Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts' machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people make in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified. So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that it was so dangerous, in terms of misinformation being out there, that it could have potentially deleterious impacts on democracy, and that they couldn't release it on an open-weights basis. That's a model that my students now build just to, you know,
learn the process of building models, in an afternoon, right? So that's how cheap that has gotten, six years later. And vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the WIRED database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder to your own tribe, so that they will, you know, continue to support whatever it is you're pushing for. And for that purpose, it doesn't have to be that convincing or that deceptive. It just has to be cheap fakes, as they're called. It's the kind of thing that anyone can do in ten minutes with Photoshop. Even with the availability of sophisticated AI image generators, a lot of the AI misinformation we're seeing is these kinds of cheap fakes that don't even require that kind of sophistication to produce, right? So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great example is in cybersecurity, which, as you know, I worked in for many years before I started working in AI. If the concern is that AI is going to find software vulnerabilities and exploit them, and exploit critical infrastructure, whatever, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities. But it turns out that this has actually helped defenders over attackers, because software companies can, and do, and this is, you know, really almost the first line of defense,
use these automated vulnerability discovery methods to find vulnerabilities and fix them in their own software, before even putting it out there where attackers can have a chance to find those vulnerabilities. To summarize all of that: a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. We have other ways to defend; in fact, in a lot of ways, AI itself is the defense against some of these AI-enabled threats we're talking about. And thirdly, the defenses that involve trying to control AI are not going to work, and they are, in our view, pretty dangerous for democracy. Let's talk a little bit about AI as normal technology, because I think this is a world that we're headed into that you've been thinking about. Because we're, you know, we're not going back. Anybody who hangs out with people who write computer code knows that using these systems to write computer code is, like, normal now. Tell me a little bit about this version of AI as normal technology, 'cause it feels like the future now. But actually, I think, you know, what do they say: the future is here, it's just not evenly distributed. Like, it is not evenly distributed yet. So what does it look like? Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI: that AI will be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today's economy, at least. And it asks: how quickly will this happen? What are the effects going to be? A lot of people who think this will happen think that it's going to happen this decade, and that brings a lot of fear to people, and a lot of very short-term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, the analogy to the Industrial Revolution is instructive.
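The automated vulnerability discovery Narayanan mentions, fuzzing, is conceptually simple: feed a program lots of randomized inputs and record which ones crash it. Here's a toy sketch of that idea; the `parse_record` target and its planted bug are invented for illustration, and real fuzzers like AFL or libFuzzer add coverage guidance on top of this loop.

```python
import random
import string

def parse_record(data: str) -> str:
    """A toy parser with a planted bug: it assumes a ':' separator is present."""
    key, value = data.split(":", 1)  # raises ValueError when ':' is missing
    return key.strip()

def fuzz(target, trials=1000, seed=0):
    """Throw random printable strings at `target` and collect the inputs
    that make it raise an exception (i.e., candidate vulnerabilities)."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        length = rng.randint(0, 12)
        candidate = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashes = fuzz(parse_record)
```

This is exactly the defender's workflow he describes: run the fuzzer against your own code, look at the crashing inputs, and fix the parser before shipping, long before an attacker gets the same chance.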
There, a lot of physical tasks became automated, but it didn't mean that human labor was superfluous, because we don't take powerful physical machines, like cranes or whatever, and allow them to operate unsupervised, right? So with those physical tasks that became automated, the meaning of labor is now all about the supervision of those physical machines, which are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case, that what jobs might mean in a future with cognitive automation is primarily the supervision of AI systems. And so for us, that's a very positive view. We think that for the most part those will still be fulfilling jobs. In certain sectors there might be catastrophic impacts, but it's not that, across the board, you're going to have drop-in replacements for human workers that are going to make human jobs obsolete. We don't really see that happening. And we also don't see this happening in the space of a few years. We talk a lot about the various sources of inertia that are built into the adoption of any new technology, especially a general-purpose technology like electricity. We talk about another historical analogy, where it took factories a long time to replace their steam boilers with electricity in a useful way, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And, you know, we say that we have a few decades to make this transition, and that even when we do make the transition, it's not going to be as scary as a lot of people seem to think. Let's say we're living in the future, the Arvind future, where we've gotten all these AI questions right.
What does it look like for, you know, the average person, or somebody doing a job? Sure. A few big things. I want to use the internet as an analogy here. Twenty, thirty years ago, we used to kind of log on to the internet, do a task, and then log off. But now the internet is simply the medium through which all knowledge work happens. Right. So we think that if we get this right, in the future AI is going to be the medium through which knowledge work happens. It's kind of there in the background, automatically doing stuff that we need done, without us necessarily having to go to an AI application, ask it something, and then bring the result back. There's this famous definition of AI: that AI is whatever hasn't been done yet. What that means is that when a technology is new, and it's not working that well, and its effects are double-edged, that's when we're more likely to call it AI. But eventually it starts working reliably, and it kind of fades into the background, and we take it for granted as part of our digital or physical environment. And we think that that's going to happen with generative AI to a large degree. It's just going to be invisible, making all knowledge work a lot better. And human work will be primarily about exercising judgment over the AI work that's happening pervasively, as opposed to humans being the ones doing the nuts and bolts of the thinking in any particular occupation. I think another one is, I hope that we will have gotten better at recognizing the things that are intrinsically human, and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. Some folks, for instance, are saying, oh, let's automate government and replace it with a chatbot.
You know, we point out that that's missing the point of democracy. If a chatbot is making decisions, it might be more efficient in some sense, but it's not in any way reflecting the will of the people. So whatever people's concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should. You know, maybe it will free up more human time to do the things that are intrinsically human and really matter, such as how we govern ourselves and so forth. And I would go back to the very thing we started from, which is AI and education. I do think there's orders of magnitude more human potential to open up, and AI is not a magic bullet here; technology on the whole is only one small part of it. But I think as we more generally become wealthier and we have, you know, lots of different reforms, hopefully one of those reforms is going to be schools and education systems being much better funded and able to operate much more effectively, and, you know, every child one day being able to perform as well as the highest-achieving children today. There's just an enormous range. And so being able to improve human potential, to me, is the most exciting thing. Thank you so much, Arvind. Thank you, Jason and Cindy. This has been really, really fun. I really appreciate Arvind's hopeful and correct idea that what most of us do all day isn't really reducible to something a machine can replace. That, you know, real life just isn't like a game of chess, or the test you have to pass to be a lawyer, or things like that. And there's a huge gap between the actual job and the thing that the AI can replicate. Yeah. And he's really thinking a lot about how the debates about AI in general are framed at this really high level, which seems incorrect, right?
I mean, it's sort of like asking if food is good for you, or if vehicles are good for you. But he's much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to learn what methods they can use to make AI work with you and for you, and how to make it work for the application you're using it for. It's not something you can just apply wholesale across anything, which makes perfect sense, right? No one, I think, thinks that, but industries are plugging AI into everything, or calling it AI anyway, and he's very critical of that, which I think is good. Most people are too, but it's happening. So it's good to hear someone who's really thinking about it this way point out why that's incorrect.

Yeah, I think that's right. I like the idea of normalizing AI and thinking about it as a general-purpose tool that might be good for some things and is bad for others. Honestly, the same way computers are: computers are good for some things and bad for others. So we talked about vehicles and food in the conversation, but I actually think you could say the same about computing more broadly. I also liked his response to the doomers, pointing out that a lot of the harms people are claiming will happen in the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. He's not saying that it won't, but he's pointing out that in cybersecurity, for example, some of the AI methods, which have been around for a while (he talked about fuzzing, but there are others), those techniques, while they could be turned against old, insecure systems, have actually brought greater protections to cybersecurity.
The lesson is one we learn all the time, in security especially: the cat-and-mouse game is just going to continue, and anybody who thinks they've checkmated, either on the good side or the bad side, is probably wrong. And that, I think, is an important insight, so that we don't get too excited about the possibilities of AI, but we also don't go all the way to the doomer side.

Yeah, you know, the normal-technology framing was really helpful for me. Like you said, with computers, this is just a tool that has applications in some cases and not others. I don't know if anyone thought when the internet was developed that it was going to end the world or save it. I guess some people might have thought either one, but neither is true, right? It's been many years now and we're still learning how to make the internet useful, and I think it'll be a long time before we've figured out how AI can be useful. But there are a lot of lessons we can take away from the growth of the internet about how to apply AI. My dishwasher, I don't think, needs to have Wi-Fi. I don't think it needs to have AI either. I'll probably end up buying one that has those things, because that's the way the market goes. But it seems like these are things we can learn: the way we've figured out where the applications are for general-purpose technologies in the past is something we can continue to figure out for AI.

Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don't have an open market for systems where you can decide, I don't want AI in my dishwasher, or I don't want surveillance in my television. And that's a market problem.
And one of the things he said a lot is that just adding AI doesn't solve problems with broken institutions. And I think it circles back to the fact that we don't have a functional market, we don't have real consumer choice right now. And it's not just consumers, I mean worker choice and other things as well. The problems are in those systems and the way power works in those systems, and if you just center this on the tech, you're missing the bigger picture and also the things that we might need to do to address it.

I wanted to circle back to what you said about the internet, because of course it reminds me of Barlow's Declaration of the Independence of Cyberspace, which, you know, has been interpreted by a lot of people as saying that the internet would magically make everything better. But Barlow told me directly that by projecting a positive version of the online world and speaking as if it were inevitable, he was trying to bring it about. Right. And I think this might be another area where we do need to posit a better future and bring it about, but we also have to be clear-eyed about the risks and whether we're headed in the right direction or not, despite what we hope for.

And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelly.

And I'm Cindy Cohn.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes, and sound design by Gaëtan Harris.
