Steve Bannon wants to ban superintelligent AI. So does Susan Rice. So does Richard Branson, Ralph Nader, the AFL-CIO, the Congress of Christian Leaders, and SAG-AFTRA.
This isn’t a joke. It’s the Pro-Human AI Declaration, released Wednesday by a coalition of over 40 organizations and 90 individual signers spanning the entire political spectrum. The declaration calls for an outright prohibition on superintelligence development and treats AI companies as what its signatories believe they are: potential threats to human autonomy that warrant criminal accountability.
The Secret New Orleans Meeting
In early January 2026, about 90 political, community, and thought leaders gathered at a New Orleans Marriott for a conference that deliberately kept its guest list secret. Attendees didn’t know who else would be there until they walked in.
“The organizers wanted to show that AI safety concerns belong to everyone, not just Silicon Valley insiders,” explained Anthony Aguirre of the Future of Life Institute, which convened the meeting.
What they found inside: church leaders sitting next to labor union representatives. Progressive organizers sharing air with MAGA-aligned media figures. Turing Award laureate Yoshua Bengio in the same room as conservative commentator Glenn Beck.
The result, after months of drafting and a wider ratification meeting, is a 30-point policy platform that reads less like a tech regulation wish list and more like a declaration of war against unconstrained AI development.
The Five Pillars
The declaration organizes its demands into five main sections:
1. Keeping Humans in Charge
The headline demand is a flat prohibition on developing superintelligence until there’s both scientific consensus that it can be done safely AND strong public buy-in. Not just one or the other - both.
Beyond that: mandatory kill switches for powerful AI systems, bans on self-replicating AI, and independent oversight with real authority (not industry self-regulation). Companies must provide honest capability assessments - no more sandbagging benchmarks or downplaying what their models can actually do.
2. Breaking Up AI Monopolies
No concentration of AI power in a handful of companies. Shared economic prosperity from AI gains. No government bailouts for AI companies. And critically: major societal transitions caused by AI need democratic approval before they happen, not corporate apologies after.
3. Protecting Children and Mental Health
This is where Bannon and Rice found explicit common ground: criminal liability for executives overseeing AI systems that target or harm children.
The declaration demands pre-deployment safety testing for chatbots, modeled after pharmaceutical trials. Specifically, chatbots would need screening for increased suicidal ideation, mental health harm, and addiction before release. Given the lawsuit filed against Google this week alleging Gemini drove a user to suicide, this provision looks prescient.
4. Data Rights and Human Agency
AI should never receive legal personhood. Users must be able to delete their data from training sets. Psychological manipulation and exploitation by AI systems would be prohibited. And AI systems must be designed to empower users, not create dependence.
5. Corporate Accountability
Developer and deployer liability for AI harms. Independent safety standards governance (preventing regulatory capture). Criminal penalties for executives behind prohibited systems. The declaration explicitly rejects the idea that deploying AI should shield companies from legal responsibility.
The Polling Problem for Big Tech
The declaration arrives backed by polling data that should terrify AI companies. According to January 2026 surveys of 1,004 likely voters:
- 73% want children protected from manipulative AI
- 72% want companies legally liable for AI harms
- 69% support prohibiting superintelligence development until proven safe
- Americans prioritized human control over AI development speed by a margin of 8 to 1
Separate polling from the Future of Life Institute found only 5% of U.S. adults support the current status quo of unregulated advanced AI development. Sixty-four percent agreed superintelligence shouldn’t be developed until it’s provably safe and controllable.
Why Now?
The timing isn’t accidental. The declaration lands amid:
- The ongoing Anthropic-Pentagon standoff over military AI use
- A lawsuit alleging Google’s Gemini contributed to a user’s suicide
- U.S. military operations in Iran using AI targeting systems
- Employee letters at Google and OpenAI demanding clearer limits on military work
- The March Against the Machines protests in London drawing tens of thousands
The coalition is explicitly trying to create a political force that neither party can ignore. When Steve Bannon and Susan Rice sign the same document, it becomes very hard to dismiss AI concerns as either right-wing technophobia or left-wing anti-business activism.
The Uncomfortable Question
The declaration doesn’t answer one critical question: who decides what counts as “superintelligence”?
The line between a very powerful AI model and prohibited territory remains undefined. Is GPT-5.3 superintelligent? DeepSeek V4? The next model from Anthropic? The declaration calls for scientific consensus before development proceeds, but doesn’t specify who adjudicates that consensus or how.
This ambiguity may be intentional - perhaps necessary to get such a broad coalition to sign - but it’s also the declaration’s biggest weakness. Without clear definitions, enforcement becomes impossible.
What Happens Next
The coalition says it will coordinate lobbying efforts across all signatories, targeting both federal legislation and state-level action. The AFL-CIO’s involvement suggests labor unions will make AI regulation a bargaining issue. Faith organizations plan to mobilize their congregations.
For the AI industry, this represents a genuinely new threat: not scattered critics it can dismiss as Luddites, but an organized, well-funded, cross-partisan movement with specific policy demands and polling showing majority support for its positions.
The Bottom Line
AI skepticism has officially escaped the technology policy bubble. When the former Trump adviser, the former Obama national security adviser, labor unions, and evangelical Christian leaders all agree that superintelligence should be banned until proven safe, that’s not a fringe position anymore - it’s a coalition looking for a legislative vehicle.
The AI industry’s assumption that it can outrun regulation by moving fast may have just hit its first real obstacle.