AI is being sold to the public as a gateway to a frictionless, post-scarcity future—but to media theorist Douglas Rushkoff, that narrative hides something much darker: a profound anxiety among the tech ultra-elite about the world they are actively helping to destabilize.
Rushkoff, a professor of media theory and digital economics at Queens College, CUNY, and author of books like *Survival of the Richest* and *Team Human*, recently argued that the utopian AI story functions less as a genuine vision for humanity and more as a cover for an escape plan. Speaking on the Repatterning Podcast with host Arden Leigh, he described today’s AI evangelism as a “smokescreen” for a class of billionaires who see themselves not as stewards of society, but as potential survivors of its breakdown.
In his view, the loudest AI boosters are not truly obsessed with “saving the world,” despite their polished rhetoric about solving climate change, curing disease, or ending poverty. Instead, they are preoccupied with securing their own safety in the very future they are helping create—a future marked by automation-driven unemployment, destabilized institutions, and concentrated power in the hands of a few tech platforms.
“The billionaires are afraid of being hoisted on their own petard,” Rushkoff said. “They are afraid of having to deal with the repercussions of their actions.”
The fear, as he describes it, is not abstract. The same executives who promise that AI will enhance human potential are also the ones investing in fortified compounds, private security, alternative citizenships, and speculative plans to retreat to isolated territories or even other planets. Behind the glossy marketing campaigns and keynote speeches lies a consistent pattern: protect the winners, insulate them from risk, and treat everyone else as a variable to be managed.
Rushkoff points to high-profile figures such as Meta's Mark Zuckerberg and OpenAI's Sam Altman as emblematic of a broader mindset within the billionaire class. They promote AI as an inevitable, almost spiritual leap for civilization, while simultaneously backing projects and policies that further centralize control over infrastructure, data, and capital. The contradiction, he suggests, is not a bug but the core of the story: public optimism, private fear.
Part of what makes AI utopianism so effective, according to critics like Rushkoff, is its language. It frames enormous, disruptive experiments in benign terms: “efficiency,” “innovation,” “augmentation,” “democratization of knowledge.” This lexicon glosses over the tangible costs to factory workers, coders, designers, call-center employees, writers, and countless others who may find their work automated or devalued long before new forms of employment materialize.
Economists and technologists wary of the hype note that AI’s benefits are unlikely to be distributed evenly. The massive infrastructure required to train and run advanced models—data centers, specialized chips, power-hungry cooling systems—demands colossal capital expenditures. Those costs can be shouldered only by a handful of dominant firms with access to global capital markets and governmental goodwill, and the gains flow mainly to those same firms. The result is a system in which the profits from automation accrue at the top, while the risks—job loss, social unrest, environmental strain—disperse downward.
Rushkoff’s argument fits into a broader critique of what might be called “Silicon Valley transcendentalism”: the idea that technology will lift humanity out of its historical problems rather than deepen existing inequalities. AI is simply the latest and most powerful chapter in that narrative. The promise is that algorithms will manage everything more rationally than humans ever could—allocating resources, optimizing production, even moderating social conflict.
But this technocratic dream, he suggests, often ignores or erases democratic questions: Who controls the algorithms? Who owns the data they’re trained on? Who decides what “optimization” means? And who gets left behind?
Behind the marketing campaigns, Rushkoff sees something more primal: wealthy technologists who suspect that the systems they’ve built—hyper-financialized markets, extractive digital platforms, and now AI-driven automation—are inherently unstable. They understand that widening inequality, precarious work, and the erosion of public institutions create volatility. Instead of using their money and influence to genuinely rebalance the system, many are dedicating their resources to “resilience” for themselves: bunkers, off-grid estates, experimental cities, and long-term survival plans.
In this light, AI utopianism works on multiple levels. Publicly, it calms anxieties by presenting disruption as progress and displacement as an inevitable step toward a better tomorrow. Privately, it buys time and legitimacy for those benefiting most from the transition, allowing them to accumulate more capital, more data, and more leverage before the social bill comes due.
AI hype also obscures the sheer material demands of the technology. Behind every charming chatbot and image generator lies an energy-intensive infrastructure: server farms drawing on power grids, water usage for cooling, sprawling supply chains for chips and hardware, and an enormous amount of human labor in data labeling, content moderation, and system maintenance. The term “artificial intelligence” can make it sound immaterial and clean, when in practice it is built on very physical resources and often invisible workers.
That disconnect, Rushkoff would argue, is not accidental. By portraying AI as ethereal and inevitable, tech leaders deflect questions about regulation, labor protections, environmental impact, and antitrust concerns. If AI is framed as a kind of unstoppable natural force—or even a quasi-religious destiny—then opposing its rollout becomes akin to opposing the future itself.
The narrative also fosters a particular psychology among everyday people: resignation. If we are told that AI will “replace” entire sectors no matter what anyone does, public debate narrows from “Should we do this?” to “How can I personally survive this?” That shift—from collective decision-making to individual survival strategies—is exactly what benefits those already in power. It mirrors the mindset Rushkoff identifies among the billionaire class: planning escapes rather than seeking reforms.
There is also a subtle moral repositioning at work. AI evangelists often portray themselves as reluctant heroes, forced by their own genius to push humanity forward, even if the journey is painful. Jobs will vanish, they concede, but that is the price of progress—and, conveniently, a justification for their own concentration of wealth and authority. By casting themselves as stewards of an inescapable future, they sidestep accountability for the specific choices they make about how AI is built, deployed, and governed.
Critically, Rushkoff doesn’t deny that AI can be powerful or useful. The question, for him and many other skeptics, is not whether the technology has potential, but who gets to define its purpose. An AI system used to assist doctors, support teachers, or improve public infrastructure is radically different from one optimized for ad targeting, financial speculation, or mass surveillance. Yet the utopian sales pitch tends to flatten these distinctions under one banner: “AI will make everything better.”
This is where the myth of inevitability becomes dangerous. When the future is presented as pre-written, democratic oversight looks like a nuisance rather than a necessity. But there is nothing inevitable about how AI is integrated into workplaces, governments, or everyday life. Policies, labor movements, public pressure, and alternative business models can all shape whether the benefits are shared or hoarded, whether AI augments human capabilities or replaces them to pad corporate margins.
The fear Rushkoff attributes to billionaires is, in part, a fear of that reckoning. If AI accelerates inequality and instability, the people most visibly responsible—those who funded, built, and aggressively promoted these systems—may eventually face backlash. Rather than confront that possibility through redistribution, structural reform, or genuine power-sharing, many appear to be doubling down on mechanisms of control: predictive policing, algorithmic management of workers, automated content filtering, and concentrated data ownership.
Yet their own rhetoric betrays their anxiety. When leading figures talk openly about AI as an existential risk, a force that could run out of control or destabilize civilization, they often position themselves as the only ones competent enough to manage it. The story becomes: “AI is dangerous, but trust us, we’re the ones who can contain it”—even as they race to deploy more powerful systems. It is a paradoxical strategy that both amplifies fear and consolidates authority.
For ordinary people, the challenge is to see through both the doom and the utopia. AI is neither an automatic apocalypse nor a guaranteed paradise. It is a set of tools and infrastructures being built right now by specific institutions with specific incentives. The questions that matter are concrete:
– Who owns the AI systems we rely on?
– How are workers affected, and what protections do they have?
– How are communities consulted—or ignored—in decisions about automation?
– What public investments are being made in education, retraining, and social safety nets as AI reshapes industries?
Unless these issues are addressed, Rushkoff warns, AI will likely reinforce the same dynamics that already define the digital economy: wealth piling up at the top, risk pushed down, and a small group of decision-makers insulating themselves from the consequences. The gleaming visions of AI-enabled abundance will continue to circulate, not as realistic blueprints for a shared future, but as a kind of ideological cover for an elite that fears what happens when the rest of society realizes how the game has been rigged.
Ultimately, his critique is not a rejection of technology, but of the story being told about it. The real danger, he suggests, is allowing those with the most to gain from AI to control not just the platforms and infrastructure, but the narrative about what this revolution means. As long as AI is framed primarily as a utopian project managed by benevolent visionaries, the underlying power structures will remain invisible—and the fears driving its biggest champions will remain unexamined, even as they quietly shape our collective future.
