There’s a thought experiment that has taken on almost mythic status among a certain group of technologists: If you build an artificial intelligence and give it a seemingly innocuous goal, like making as many paper clips as possible, it might eventually turn everything — including humanity — into raw material for more paper clips.
Absurd as it sounds, parables like this one have been taken seriously by some of the loudest voices in Silicon Valley, many of whom now warn that AI is an existential risk more dangerous than nuclear weapons. These stories have shaped how billionaires including Elon Musk think about AI, and they have fueled a growing movement of people who believe it could be the best or worst thing ever to happen to humanity.
So what should we actually be worried about when it comes to AI?
In Good Robot, a special four-part podcast series launching March 12 from Unexplainable and Future Perfect, host Julia Longoria goes deep into the strange, high-stakes world of AI to answer that question. But this isn’t just a story about technology — it’s about the people shaping it, the competing ideologies driving them, and the enormous consequences of getting this right (or wrong).
For a long time, AI was something most people didn’t have to think about, but that’s no longer the case. The decisions being made right now — about who controls AI, how it’s trained, and what it should or shouldn’t be allowed to do — are already changing the world.
The people trying to build these systems don’t agree on what should happen next — or even on what, exactly, they’re creating. Some call it artificial general intelligence (AGI), while OpenAI CEO Sam Altman has spoken of creating a “magic intelligence in the sky” — something like a god.
But whether AI is a true existential risk or just another overhyped tech trend, one thing is certain: the stakes are getting higher, and the fight over what kind of intelligence we’re building is only beginning. Good Robot takes you inside this fight — not just the technology, but the ideologies, fears, and ambitions shaping it. From billionaires and researchers to ethicists and skeptics, this is the story of AI’s messy, uncertain future, and the people trying to steer it.