
1440 Explores


The Future of AI and Humanity

Inside the ChatGPT Black Box (Nov 20, 2025)

Excerpt from 1440 Explores


A quick note before we begin. This episode dives into a field that is changing rapidly. It represents the state of the industry as of this year, but check out join1440.com for the latest updates.

At some point in the past few years, you've probably tried artificial intelligence. Maybe you asked something of ChatGPT. Google fed you an AI Overview. Or you chatted with an automated customer support system on a website. Each of these is a form of AI called a large language model, or LLM. At its core, AI is any technology that lets computers do things that in the past only humans could: learning, problem solving, recognizing patterns. When it works well, it can feel like sorcery. It might even feel eerily human. These LLMs can seem like they're truly thinking. And they have personality. But what's really going on when you interact with one of these LLMs? How does it work? And why did even the experts, people who have spent decades studying AI, not see this coming? To answer that, we need to talk to someone who's been thinking about AI for close to forty years. Someone who's built the tools that many AI systems today run on. We need someone like computer scientist and physicist Stephen Wolfram.

"I'd kind of alternated between doing science and technology for the last, uh, very long time now."

Stephen created Mathematica, software used around the world to solve complex math and science problems. He also built Wolfram|Alpha, a tool that provides instant answers to complicated questions, years before AI chatbots came along. So today, with Stephen's help, we're going to take you on a journey. First, we'll learn how LLMs actually work, step by step. Then we'll learn the history. And finally, limitations and problems: what LLMs can't do, and why that matters. I'm Sony Cassom, and this is 1440 Explores. We're on a mission to uncover the essential knowledge that explains your world. Stay with us.

You should tell the people who we are and what our new show is.
I'm Robert Smith, and this is Jacob Goldstein, and we used to host a show called Planet Money. And now we're back making this new podcast about the best ideas and people and businesses in history. And some of the worst: horrible people, terrible ideas, and destructive companies in the history of business. We struggled to come up with a name and decided to call it Business History. You know why? Because it's a show about the history of business. Available everywhere.

I don't want to say LLMs are effortless, but you might find the concept behind them is simpler than you expect. What is a large language model?

"What it's ultimately trying to do is to finish your sentences for you, so to speak, and then keep going. It's ultimately trying to figure out, if you start a sentence in a particular way, what is a reasonable way to continue that sentence?"

At a basic level, LLMs are like autocomplete, guessing what comes next. That's the magic trick. It's not thinking. It's not understanding. It's just predicting. So how does it do that? Well, it learns that.

"By having been fed a large amount of material, of text that people have put on the web and in books and other places."

So if you want to build an AI that sounds human, you start by feeding it everything, or at least a lot of the stuff humans have created over the centuries. Every book, every article, every website, and every Reddit argument about who should have won an Oscar in 1997. So how much information are we talking about here?

"There are maybe five or ten billion pages. And there are about eight billion people on the earth. So it's kind of like a page per person. A few million books. And there are some other sources of training data. There are increasingly things like videos that can be used, things like closed captions from videos and so on."

Given there are five to ten billion web pages out there, that's trillions of words: a number with at least twelve zeros. Plus a few million books, scientific papers, coding manuals, forum posts.
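Wolfram's "finish your sentence" idea can be sketched as a toy predictor: count which word follows which in some training text, then suggest the most frequent continuation. This is only an illustration with a made-up ten-word corpus; a real LLM uses a neural network trained on trillions of words, not raw counts.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny stand-in for the billions of pages
# a real LLM is trained on.
text = "the sky is blue the sky is clear the sea is blue".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most common continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sky"))  # "is", the only word ever seen after "sky"
print(predict_next("is"))   # "blue", seen twice, versus "clear" once
```

Even this crude counting trick already "finishes sentences"; the leap in real models is learning far subtler patterns than adjacent-word counts.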
Anything text-based that AI companies could get their hands on. As a side note, this has triggered a handful of lawsuits over copyright infringement from media outlets, authors, and other stakeholders; those cases are not yet complete. So now that it's read roughly half the internet and every Reddit post, what happens? AI doesn't really read like we do. It doesn't sit back with a cup of coffee, highlight interesting passages, or pause to reflect. It does something much weirder. Time to crack open the black box. What's really happening inside AI when it spits out a sentence, a poem, or an argument? It all comes down to something called a neural network.

"A neural network is a computer idealization of something where, like in human brains, there's some input, like a stream of text, and then there's this processing of that through this collection of artificial neurons. And the result is something like: I think the next word that would be the thing to write in this essay should be such and such."

Okay, and how does it do this very cool, brain-like thing? First, AI takes every word, or even parts of words, and turns them into numbers. These are called tokens. Tokens are the building blocks of AI language. AI doesn't really read words like we do. It understands them as numbers. If I type in "Why is the sky blue?", the AI doesn't see those words. It sees numbers. "Why" might be the token 334, "is" might be the token 567, et cetera. Every bit of language I feed the LLM is translated into a little number token. Next, it connects those tokens into a massive, tangled web. Every word links to every other word in complex ways.

"The way all that information is encoded in the end is then these numbers called weights. The number of weights is comparable to the number of neurons in our brains. The weights are the way that information is represented in a neural net."

Think of it like this. Weights are little rules that tell the model how strongly words are connected to each other.
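The word-to-number step can be sketched like this. The token IDs here (334, 567, and the rest) are invented for illustration, matching the numbers used in the narration; real tokenizers learn their vocabulary from data and usually split text into sub-word pieces rather than whole words.

```python
# A hypothetical, hand-made vocabulary mapping text pieces to token IDs.
# Real tokenizers learn this mapping and use sub-word pieces.
vocab = {"why": 334, "is": 567, "the": 12, "sky": 901, "blue": 445, "?": 30}

def tokenize(sentence):
    """Turn text into the list of numbers the model actually sees."""
    words = sentence.lower().replace("?", " ?").split()
    return [vocab[w] for w in words]

print(tokenize("Why is the sky blue?"))
# [334, 567, 12, 901, 445, 30]
```

From the model's point of view, everything after this step is arithmetic on those numbers; the words themselves never come back until the very end, when output tokens are mapped to text again.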
Like, how weighty is the connection between apple and pie? Pretty strong, right? How about apple and shoe? Less strong. Do that over and over for every combination of words ever, and you're starting to see how a neural network operates. Now you know what's going on inside an LLM. But just to recap: AI doesn't read like we do. It sees words as numbers called tokens. Then it builds connections between those tokens using weights, those little rules that tell AI which words go together and how strongly they go together. By doing this over and over, more and more, the AI learns patterns. And when you ask it a question, it doesn't think about the answer. It just predicts the most likely next word, one step at a time, based on everything it has seen before.

Alright, let's watch this prediction machine do its thing. Let's say we type into ChatGPT: Why is the sky blue? The neural network doesn't actually know why the sky is blue. It's not pondering chemistry. If we ask, it will give us an answer. Not because it understands, but because it's really, really good at guessing what usually comes next. First the model predicts: The sky is blue. That's great, right? It already spits out "the sky is blue" because it has seen that exact pattern countless times. It knows that when a question starts with "Why is the sky blue?", the most likely response is exactly that. Then it looks at the weights, those invisible little markers inside the neural net that determine how words connect. Next, it runs a quick probability check. Is the next word "clear" or "dark"? Five percent likely. Or "blue"? Seventy percent likely. Blue wins. Then it does it again. The sky... is blue... because... of the way... sunlight... interacts... It keeps rolling forward like a snowball, each word shaping the next, until it's done. Here's the wild part. The way you phrase your question, the exact words you type, totally changes what comes next. Every typo, every weird phrase you use reshapes the response.
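The weights and the "probability check" can be sketched together. The numbers below are hypothetical, borrowing the apple-pie example and the 70-percent figure from the narration; a real model stores billions of learned weights, not a small hand-written table.

```python
# Hypothetical "weights": how strongly one word pulls in another.
# Real models store billions of these numbers, learned from data.
weights = {
    ("apple", "pie"): 0.90,
    ("apple", "shoe"): 0.02,
    ("sky", "blue"): 0.70,
    ("sky", "clear"): 0.05,
    ("sky", "dark"): 0.05,
}

def connection(a, b):
    """How weighty is the link from word a to word b?"""
    return weights.get((a, b), 0.0)

# The probability check: which continuation does "sky" pull hardest?
options = ["blue", "clear", "dark"]
best = max(options, key=lambda w: connection("sky", w))
print(best)  # "blue" wins at 0.70
```

With no randomness at all, the model would always take the top choice like this; the next section explains why real systems deliberately don't.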
"Every step, it's saying, let me produce the next word. Okay. Now I take everything I've said so far, feed it back into the LLM, and then say, okay, now produce the next word. So it's progressively producing words in that way."

And it's not just prediction. There's randomness built in, on purpose. Because if it always picked the most likely word every time, it would sound kind of robotic.

"It turns out that it's better to use a small amount of randomness, in not always picking the word that the LLM said was most likely. That turns out to produce slightly more lively text."

So the cunning people at these AI companies have actually programmed in a wee bit of randomness, every so often, which makes it sound more human. So next time ChatGPT gives you a slightly weird answer, that's just the randomness doing its job, making AI sound a little more like us. Like us. Why are LLMs so bad at some things that seem so simple? That's in a moment.

Listen, learning has never been harder. The internet is overwhelmed with low-quality content: clickbait with little substance, AI-generated slop, and opinions masquerading as facts. Curious people like you are left sifting through it instead of finding what matters. Enter 1440 Topics. We've curated the highest-quality resources from across the internet. Think data visualizations, captivating videos, and long-form journalism, and we pair them with staff-written overviews to make every subject easy to understand and explore. Want to learn about venture capital or how new weight-loss drugs work? Do you keep reading about CRISPR but are missing the best 101 on the breakthrough technology? All of this and so much more at join1440.com. 1440 Topics: separating what's worth your time from the rest of the internet.

We've covered how LLMs work. Mostly, they're guessing. And sometimes they guess wrong. An LLM doesn't know why the sky is blue, thinking of our previous example. It doesn't have ideas or opinions.
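The loop Wolfram describes, produce a word, feed everything back in, produce the next, plus the deliberate dash of randomness, can be sketched like this. The mini probability table is invented for illustration; a real model computes fresh probabilities over a huge vocabulary at every single step.

```python
import random

# Hypothetical next-word probabilities, keyed by the last word so far.
table = {
    "sky": [("is", 1.0)],
    "is": [("blue", 0.7), ("clear", 0.2), ("dark", 0.1)],
    "blue": [("because", 0.8), ("today", 0.2)],
    "because": [("sunlight", 0.9), ("physics", 0.1)],
}

def next_word(context):
    """Sample the next word, with a little randomness on purpose."""
    words, probs = zip(*table[context[-1]])
    return random.choices(words, weights=probs)[0]

random.seed(0)  # make this demo repeatable
context = ["sky"]
while context[-1] in table:      # stop when the toy table runs out
    context.append(next_word(context))
print(" ".join(context))
```

Run it a few times with different seeds and the sentence changes slightly each time. That wobble is the "wee bit of randomness" the episode describes, usually controlled in real systems by a setting called temperature.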
It just pulls from patterns in its training data, sometimes successfully. Other times, you've probably seen this: it spits out an answer that sounds super confident, but the more you think about it, the less sense it makes. That's because it's not actually checking for truth. It's just predicting what sounds right. We saw this firsthand at 1440 while we were prepping for another episode. We asked it for well-known experts in a specific field, and it gave us three completely fake people, complete with full bios. Sounded legit. Totally made up. It even made up fake middle names. Here's the thing. It wasn't lying. AI doesn't lie the way a person does. It just doesn't know the difference between fact and fiction. It's just assembling words based on what statistically should come next. Which is why sometimes it gets things wildly wrong.

"It's trying to produce text that's kind of like what it's seen before. If you ask it something where there isn't an example of that on the web, it will just sort of make up something that is roughly like what it's read."

And when that happens, you get what's called a hallucination: an answer that sounds perfectly confident but is completely wrong. Now, this one surprises people. Ask an LLM, what's two plus two? And it'll quickly spit out the answer. But now ask it some complicated pre-calculus question, and it might not get that right.

"What they don't do well is precise computations. That's not how they're set up. That's not what they're built to do. What they do well is similar to what humans do well, and humans do pretty well, uh, just yakking and talking."

Because remember, LLMs don't solve problems. They don't do math. They just try to predict the next word. If you ask, what's twelve times twelve? It might get that answer right, because someone somewhere has written "twelve times twelve is 144." Give it a bigger, complex equation, one it's never seen before, and now it's just guessing.
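That failure mode, guessing at arithmetic instead of computing it, is why real chat systems increasingly hand math off to exact code, a pattern the episode turns to next. Here is a minimal, hypothetical sketch of that routing; the function names and the tiny regex-based parser are invented for illustration, and production "tool use" is far more elaborate.

```python
import re

def calculator(expression):
    """Exact arithmetic: the tool, not the language model."""
    a, op, b = re.match(r"(\d+)\s*([+*])\s*(\d+)", expression).groups()
    a, b = int(a), int(b)
    return a + b if op == "+" else a * b

def answer(question):
    """Route math to the calculator; everything else would go to the LLM."""
    m = re.search(r"\d+\s*[+*]\s*\d+", question)
    if m:
        return str(calculator(m.group()))
    return "(hand the question to the language model)"

print(answer("What's 12 * 12?"))       # "144", computed, not predicted
print(answer("Why is the sky blue?"))  # falls through to the LLM
```

The division of labor is the point: the language model handles the yakking and talking, and a deterministic tool handles the part where a single wrong digit matters.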
"If you ask a human to run a piece of computer code in their head, nobody can do that. And it's the same with large language models. They can't."

But this is starting to change. Today, LLMs are getting helpers. Instead of fumbling through math with pure language prediction, they're being paired with actual calculators. So next time you ask a chatbot for a complicated math solution and it gets it right, that wasn't the LLM. That was a calculator, quietly doing the heavy lifting in the background.

So we've been talking about how ChatGPT and other LLMs work, about how they take everything they've read, break it into numbers, and predict what comes next. Here's the twist. The same idea is what powers AI image generation too. Think of it like this. If a language model is predicting the next word in a sentence, an image model is predicting the next pixel in a picture. As always, it doesn't see the way we do. Instead, it has learned patterns from millions of images: how shadows fall, how a wet surface shines, how eyes usually look, and so on. When you ask an AI to create, say, a dinosaur wearing a 1440 hat on a surfboard, it doesn't pull up a real photo. It builds something new, pixel by pixel, predicting what that should look like based on all the hats and surfboards it's ever seen. Text or images, it's sort of the same game: prediction, probability, patterns. Except that instead of words, it's pixels.

Which brings us to today. Or, more precisely, to 2022. For years, this technology had been simmering in the background, growing and evolving, mostly out of sight. A niche thing. An academic curiosity. Then, seemingly overnight, it was everywhere. So what was it about the year 2022, around that period, that made AI just explode? ChatGPT worked. And nothing before it had. Ah yes, I remember that fall of twenty twenty-two. Okay, listen to this: "Very creepy. A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds."
As spectators and consumers of this wild new chatbot, we were amazed at how well it seemed to work.

"I remember when I first was chatting with the folks who built ChatGPT, the first question I asked was, did you know it was going to work? Their answer was, absolutely not. So none of us knew it was going to work."

And that's the thing. This wasn't just another incremental tech upgrade. It wasn't like getting a slightly better search engine or a new version of your phone's operating system. This felt different. It felt like a leap. History has seen moments like this before, when a technology that had been simmering in the background for years suddenly crossed an invisible threshold and changed everything.

"I think kind of an analogy is what happened with the invention of the telephone. People had known for fifty years that in principle you could transmit speech electrically, and so on. But Alexander Graham Bell did a bunch of technological hackery, and suddenly he got to something where you could actually understand the speech at the other end of the telephone, so to speak. It wasn't clear when that was going to happen. It wasn't clear what made that happen."

As I took in all the tech we learned, the science, the history, the patterns that mirror inventions like the telephone, part of me also couldn't shake a nagging thought. At the end of the day, aren't these really complicated machines actually doing something really simple? They predict the next word. That's all. Strip away the layers of jargon, the billions of calculations, the neural networks, and isn't this whole thing just a fancy autocomplete? I challenged Stephen on this. So basically what you're saying is, what the LLM's doing, it's not that complicated. It's just autocomplete, basically.

"Well, I, I don't know. It's, uh, you know, you ask, is what the LLM is doing that complicated? You could ask the same question: is what brains are doing that complicated?"
"The story of what an LLM is doing is probably fairly similar to the story of what brains are doing. You can be more or less impressed with what brains are doing, but that's the level we've got."

Maybe it's not that LLMs aren't impressive. Maybe it's that they reveal how basic we might be. Maybe we're just walking neural nets, stringing words together, predicting what comes next and calling it thought. And taking it a step further: what if creativity itself is just pattern recognition? A remix of everything we've ever read, heard, or experienced, stitched together in a way that feels new. Maybe AI is really just the next logical step. That's a lot to process. In an effort to escape this minor existential crisis and avoid spiraling into whether I'm just a glorified autocomplete, I steered our conversation toward the future. Should we be worried about where these technologies are headed?

"Well, you know, it's funny, because people say, oh, the AIs are going to take over everything, et cetera, et cetera, et cetera. And you know, I study a certain amount of history. Read these things that people wrote about AI in the early 1960s. And the thing that's really amusing is, many of the paragraphs you could just lift out of the thing from 1962 and stick in 2025 and it would fit."

In other words, AI panic isn't new. We've been worrying about thinking machines for decades, and for just as long, we've imagined the worst. Killer robots. Rogue superintelligence. Skynet flipping the off switch on humanity. Maybe you remember HAL, the AI from 2001: A Space Odyssey. "Hello, HAL, do you read me?" HAL didn't just talk. He listened. He reasoned. And he even refused. "Open the pod bay doors, HAL." "I'm sorry, Dave. I'm afraid I can't do that." It was chilling. The idea that a machine could go beyond its programming, could lie, could manipulate? But Stephen wasn't exactly worried about any of that. To him, the real risk wasn't AI attacking us. It was AI entangling itself with us. Because AI doesn't exist in isolation.
It's not some separate force acting on the world from the outside. It's connected to us. We're its users, its source of data, the thing it learns from, and the thing it adapts to, which means the most powerful system AI will ever influence is us.

"One system you will necessarily connect the AI to is humans, 'cause they're the users of the AI. You know, the AI learns enough about humans that if the AI wants to convince the humans, hey, you should do this or that, that's something the AI will probably be pretty good at doing. That's the sort of flip side of having an AI that's good at tutoring people and teaching people and so on. The AI can learn how to teach people stuff, or how to get people to do stuff."

So where does that leave us? We started by saying AI is just a supercharged autocomplete, connecting words, not understanding them. It doesn't think. It doesn't reason. It just follows patterns. And yet, even without understanding, it's powerful. Because language shapes how we think, and AI, without really meaning to, is shaping the way we interact with information and even each other. Here's the thing. This isn't the first time a new technology has changed the way we communicate. The printing press. The telegraph. Television. The internet. Each one reshaped the way we see the world. AI just happens to be the latest. The difference? It's not just delivering information. It's responding. Personalizing. Reflecting back what it's learned from us. So the big question isn't whether AI is thinking. It's how we're interacting with it. How we use it. How we question it. How we decide what role it should play. And that's all completely up to us.

Thanks for listening to 1440 Explores. I'm Sony Cassom. Make sure to follow the show and leave a review on Spotify, Apple, or wherever you listen to your podcasts. And let us know what you think at podcast@join1440.com. While you're at it, start your learning journey with us at join1440.com.
Subscribe to our free daily and weekly newsletters on world affairs, business and finance, society and culture, and much more. 1440 Explores is a production of Rhyme Media for 1440 Media. This episode was produced by Nicolo Minoni. Our sound designer is Jay Cowett. The executive producer at Rhyme is Dan Bobkoff, and the executive producers at 1440 are me and Drew Steigerwald. See you next time.

This excerpt was generated by Pod-telligence


