The Perils and Possibilities of AI
President Truman used to joke about how his economic advisors were always telling him, “on the one hand this… but on the other hand that.” What I need, he said, “is a one-handed economist.”
It’s a good line. But he knew some issues are too complex, too uncertain, and too consequential to be constrained to a single perspective. Government intervention in the economy is a classic example, and AI dwarfs it. This isn’t just another area of policy where we can try one thing, then course-correct. If we get this wrong, it could be disastrous. For the economy, yes. But also for education, democracy, freedom, and maybe even the future of our species.
The truth is, AI isn’t just a two-sided coin. It’s more like a multi-faceted gem, where each facet represents a distinct, independent path to peril or promise. The fears aren’t all the same, and if one “hand” drags us into a certain kind of dystopia, it likely precludes the others. Nor are the potential solutions all the same, or clear. Even the best-case scenarios leave many questions open to debate.
Let us consider a few of the hands before us.
Technological Changes
It’s tempting to look to the past for guidance. “AI is just a new tool,” some are inclined to say. Electricity. Aviation. Antibiotics. These were breakthroughs—huge ones—with world-changing results. “AI is like them, right?”
“No,” more insightful observers counter, “we should look to technologies with a more complicated legacy, like fossil fuels and nuclear power.” Useful. Profitable. Game-changing. But also destabilizing in dangerous and sometimes irreversible ways. Yes, these can give us some insight into AI. But they fall short of what we need.
Because AI is not just another tool. It’s not a smarter version of what came before. It’s something unlike any technology humanity has ever created. It’s a tool that thinks. Or, at the very least, mimics thought in ways that can surpass us, deceive us, and outmaneuver us. We’re not just building a faster calculator. We’re building something that can outplay us at our own games, sometimes literally.
An Alien Takeover
Historian Yuval Noah Harari has been warning about this shift for years, and his most recent book, Nexus: A Brief History of Information Networks from the Stone Age to AI, draws a clear line between the past and what’s now emerging. AI, he argues, is not artificial. It’s alien. It doesn’t think like us. It doesn’t learn like us. And it doesn’t need to. AI is not merely a super-efficient tool or a powerful weapon. It is something fundamentally different. Even nuclear weapons, the greatest threat to civilization the twentieth century produced, could not decide to deploy themselves. Nothing we have previously made could decide anything, but AIs can make decisions and will likely do so more and more as they progress.
Take the ancient game of Go. People have been playing it for thousands of years in Asia. We thought we had mastered it. We thought we understood the possibilities. But when DeepMind’s AlphaGo defeated world champion Lee Sedol in 2016, it made moves no human had ever tried, strategies that initially looked like mistakes and turned out to be brilliant. This wasn’t just a win. It was a wake-up call: an intelligence operating on wholly different principles.
These systems are already showing us patterns we don’t see, finding connections we don’t make, and generating text, images, and sounds that feel human but aren’t. What might they conclude is justified or necessary in the future? This is the fear that AI might develop its own agency, make independent decisions, and, as Harari puts it, shift control and power from human beings to “alien intelligence.” Whether that future looks like extermination or domestication—whether we’re seen as threats or pets—the result is the same: we lose control. It’s the “Terminator” scenario, where AI itself becomes the threat, acting against humanity for its own, incomprehensible reasons, or the chilling “People Zoo” scenario, where humanity is preserved but managed, stripped of self-determination. If this hand takes hold, other fears about human misuse will not matter. But it need not be that extreme to still be alarming. We are entering what Harari calls “the end of human history.” We need not die off or be locked up. We may even live enjoyable lives. But we will no longer be the only ones calling the shots and shaping the future. At the very least, we will be sharing history with AIs.
When Bad Actors Wield AI
Harari’s concerns go beyond what the machines might do on their own. What people will do with them may prove the greater danger, at least in the short term. In this equally terrifying hand, a few humans remain firmly in control, but use AI as an unparalleled instrument of domination.
AI combined with biotechnology, he warns, could lead to “digital dictatorships,” in which those in power use AI to overwrite democracy, manipulate emotions, and rewrite reality in real time.
“We are no longer mysterious souls. We are now hackable animals.”
In a world of surveillance capitalism, deepfakes, algorithmic manipulation, and invisible recommendation systems shaping what we see and believe, that doesn’t feel like a metaphor.
The Job Reckoning
Then there’s the very tangible hand of economic disruption. While some predict AI won’t significantly threaten jobs in the short term, others, like Harari, warn of immense danger during the transition period. It’s not just about losing jobs; it’s about geography: entire nations and regions could lose their economic base, creating pockets of profound despair and instability while more tech-savvy parts of the planet prosper. We could see mass unemployment on an unprecedented scale, fueling social and political disruption far beyond anything we’ve faced in the past. Such unrest could destroy any AI future and leave only revolutions and violent conflicts in its wake.
The Erosion of Truth
Beyond jobs, there’s a distinct threat to the very fabric of our society: the collapse of trust. Harari argues that modern liberal democracy, built on specific information technologies—from newspapers to radio, TV, and the Internet—may not survive if AI changes how we share and consume information even more fundamentally than it already has. With AI capable of generating hyper-realistic fake content, impersonating humans online, and spreading disinformation with superhuman efficiency, how can democratic discourse even function? If we can’t tell what’s real from what’s generated, if every voice can be faked, then conversation itself, the foundation of shared reality and political action, becomes impossible.
This scenario presents its own unique risks: not physical harm or economic collapse, but a descent into an unnavigable hall of mirrors. In a 2023 conversation with Harari, even the more optimistic Mustafa Suleyman, co-founder of DeepMind and Inflection AI, agreed there is an urgent need to ban impersonation by digital people, though both men seemed to believe that coordinated global action is unlikely. At best, we may create a “coalition of the willing” to combat this unique and insidious danger. We may even be able to harness AI to detect these lies and nullify them, but has trust already eroded so far that most people would refuse to accept the detections as real?
Systemic Collapse
There’s a moment in The Feed, the dystopian novel by Nick Clark Windo, where the narrator casually explains why people don’t need to learn anything anymore. The information is just there. Instantly available. Always accessible.
“You can look things up automatic, like what battles of the Civil War George Washington fought in and shit.”
It’s meant to be funny, for a moment, but the weight grows heavier the longer you live with it. The Feed vividly portrays a world where ubiquitous technology, while offering immense convenience, subtly undermines the very human capacity for knowledge and independent thought, leading to a chilling dependency.
Despite having access to more information than any generation in history could have imagined, we seem to be growing less informed with each passing tweet. Not only about basic facts, but about how to think, how to question, and how to judge what’s real. Now, when public engagement is paramount, we are sliding in the wrong direction, simply because it’s easy.
Let AI do it. Let the app summarize it. Let the autocomplete finish the thought.
If the current trend continues, we could get the world of The Feed, or worse. A future where people can’t tell you what happened in the past, or the fundamentals of anything, not because they’re unintelligent, but because they were taught not to bother. And with no past, no foundations, you have nothing to build upon. The more students turn to AI to do their schoolwork for them, and the more educators turn to lazy uses of AI—creating and grading simplistic assignments—the less equipped future generations will be to harness the wonders this technology might make possible. Because wonders, like wisdom, aren’t automatic, and you can’t prompt your way to them.
It brings to mind Aldous Huxley’s iconic novel, Brave New World, in many ways, but I keep coming back to The Machine Stops, E.M. Forster’s strange little dystopian story from 1909, as perhaps the most prophetic work on the subject. In it, people live underground, isolated from one another, each in a private room where they are completely dependent on a great machine to provide for their basic needs and fulfill their limited desires. They praise it, pray to it, submit their lives to it. Over time, the machine goes into a slow and quiet collapse, like AI models that spiral into gibberish and diminishing returns when fed their own outputs instead of fresh human work. But when Forster’s machine breaks, no one knows how to fix it. No one even knows how it works anymore. The people avoid the surface of the earth because it’s a polluted wasteland, but when everything falls apart, a return to the surface is the only option for the survivors.
What happens next? Hopefully, we will never be forced to find out.
On the Other Hand: Hope and Progress
Of course, that’s only part of the story.
For every fear that AI will be misused or misunderstood, there are hopes that it will be the thing that finally helps us solve our biggest problems. Hunger, disease, poverty, climate change, cancer, water scarcity. The list is endless. The people building AI systems are supremely confident that we are headed for greater and greater breakthroughs, not breakdowns. And this isn’t just bluster or wishful thinking.
Demis Hassabis, the co-founder of DeepMind, has often said that the point of AI isn’t to replace human intelligence but to amplify it. He points to projects like AlphaFold, which used AI to predict the 3D shapes of proteins, a challenge that stumped biologists for decades. That one advance has already accelerated drug discovery and reshaped parts of molecular biology.
Fei-Fei Li, a leading voice in AI ethics and education, believes we can make AIs that are “human-centered.” Systems that work with people, not against them. That uplift us, not dumb us down. She’s focused on how AI can assist in classrooms, in hospitals, and in places where resources are limited but the needs are great.
And others, like OpenAI CEO Sam Altman, describe a possible future of post-scarcity. A world where machines do the labor and humans do the living. Work becomes optional; wealth becomes distributed; creativity becomes abundant. Utopian? Maybe, but maybe not.
Even now, before we’ve reached anything close to superintelligence, AI is saving lives. It’s reading radiology scans and catching signs of cancer earlier than doctors can. It’s helping predict wildfires, floods, and hurricanes, giving people more time to evacuate. It’s assisting blind users with navigation and helping people with speech impairments communicate in ways they never could before. And the other day on LinkedIn, I saw a story about a 17-year-old boy who used AI and about $300 to make a robotic arm that responds to your thoughts, just as a real arm does.
None of this should be brushed aside. These are substantial, tangible benefits, already in the world, changing lives for the better.
So, on the other hand, maybe AI really is the greatest leap forward we will ever know. An overwhelmingly positive turning point in the human story. That’s the hope. But hope, like fear, needs to be tested in the laboratory of time.
The Unseen Path
AI is a genie we can’t put back in the bottle, or if you don’t like magical metaphors, toothpaste we can’t squeeze back into the tube.
The only large-scale society I know of that successfully held off a transformative technology was Japan, which all but gave up firearms for more than two centuries because they threatened samurai culture. But even that didn’t last. And this isn’t muskets. This is AI. This is software. This is the entire world working on the same idea at the same time, with advances shared at light speed.
There’s no stopping it. Our only hope is shaping it.
We can choose to stay involved, to keep AI human-centered, and to remain vigilant about who’s in charge and what they are allowed to do. Or we can sit back and let someone—or something—else take over.
There is no clear reason to conclude that AI will become conscious or malevolent, other than the fact that sci-fi books and movies keep telling us it will. But there are plenty of reasons to fear that control will be handed to the wrong people. Or worse, that we’ll forget to hold on to the controls altogether.
My fear is that once one of these distinct paths becomes undeniably clear, or once a scenario we have yet to envision takes hold, it may be too late to choose a different one. My hope is that by the time the path becomes clear, we will find we have already navigated it well, and that we are finally out of the woods.