The scenario came not from a film but from a briefing by some of the world’s leading AI “doomers,” who believe the technology’s unfettered development poses an existential threat.
This meeting in February marked a tentative convergence of two powerful, and historically distant, strands of American skepticism toward AI. On one side are figures like Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute, whose warnings of AI-driven human extinction are backed by Silicon Valley billionaires and the data-driven philosophy of effective altruism. For over a decade, this faction has focused on long-term, catastrophic risks.
On the other side are populist leaders like Sanders, whose concerns are more immediate and economic. They warn of mass job losses, the exploitation of public resources by data centers, and the societal impact of AI on children. Sanders has taken a leading role, pushing for a moratorium on new AI data centers and framing the issue as one of corporate power and worker displacement.
The fact that Sanders is now engaging seriously with the existential arguments of the “doomers” suggests a potential political realignment. “I know there have been a lot of science fiction novels and movies,” Sanders said afterward, “but these guys no longer think that this is science fiction.” This nascent alliance represents a formidable, if fragile, coalition that could reshape the political debate over AI regulation.
A Coalition of Convenience, Fraught with Distrust
For all their shared apprehension, the two groups are divided by fundamental differences in ideology and priorities. The tech-adjacent safety advocates often operate with funding from the very billionaire class that populists like Sanders distrust and campaign against. Their focus on speculative, world-ending scenarios can seem abstract compared with the bread-and-butter issues of jobs and infrastructure that motivate the populist base.
Conversely, the long-term risk community has historically viewed the economic and social concerns of populists as secondary to the primary goal of preventing an AI apocalypse. Bridging this gap requires each side to accept the other’s core premises as valid, a significant hurdle given their deeply ingrained worldviews.
Yet the pressure to collaborate is growing as the pace of AI development accelerates. A united front could wield substantial political influence, combining the technocratic credibility and financial resources of the safety movement with the grassroots energy and legislative heft of populist politicians. Their combined efforts could push for more aggressive regulatory frameworks than either could achieve alone.
The ultimate success of this unlikely alliance hinges on a simple, unresolved question: whether mutual fear of artificial intelligence can overcome mutual distrust. If it can, they may yet dictate the terms of America’s AI future. If it cannot, their fragmented opposition may allow the technology’s development to continue unchecked by meaningful political constraint.