The AI industry’s central assumption is that smarter is better. More capable models. More sophisticated reasoning. More intelligent agents. Progress means making AI systems more like the idealized human expert: knowledgeable, rational, effective.
New research from Neil F. Johnson at George Washington University demonstrates that this assumption breaks down catastrophically when AI agents interact. In multi-agent environments with limited resources, increasing agent intelligence doesn’t improve outcomes. It makes them worse.
The Scarcity Paradox
Johnson studied populations of competing AI agents operating under resource constraints. He manipulated four variables: the innate diversity of language models, individual reinforcement learning capabilities, emergent tribe formation, and resource availability.
The finding: “Under resource constraints, greater AI diversity and learning capabilities paradoxically increase dangerous system overload.”
That sentence deserves re-reading. The exact properties we praise in AI systems - learning from experience, diverse approaches, sophisticated reasoning - become failure modes when resources are limited. Smarter agents don’t coordinate better. They compete harder, consume more, and crash the system.
This isn’t an artificial scenario. Compute is limited. API calls are limited. Database connections are limited. The real world runs on constrained resources.
The Capacity Threshold
The research identifies a single measurable variable that determines whether intelligence helps or harms: the capacity-to-population ratio. Before deployment, you can predict whether adding more capable agents will improve or degrade system performance.
When resources are abundant relative to agent population, sophistication helps. The smarter the agents, the better the outcomes. When resources are scarce relative to population, sophistication hurts. Each agent’s optimization works against the collective good.
Johnson’s math is precise: “The critical factor can be determined before deployment.” Companies could calculate this. They choose not to.
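The claim is simple enough to express as a pre-deployment check. A minimal sketch, assuming a hypothetical `critical_ratio` placeholder (the paper derives the actual threshold from its model; the value here is illustrative only):

```python
def predicted_regime(capacity: float, population: int,
                     critical_ratio: float = 1.0) -> str:
    """Hypothetical pre-deployment check based on the paper's central claim:
    the capacity-to-population ratio determines whether added agent
    sophistication helps or harms the system.

    critical_ratio is an assumed placeholder, not a value from the paper.
    """
    ratio = capacity / population
    return "sophistication helps" if ratio >= critical_ratio else "sophistication harms"

# Abundant resources: 200 capacity units shared by 100 agents.
print(predicted_regime(200, 100))  # sophistication helps
# Scarce resources: 50 capacity units shared by 100 agents.
print(predicted_regime(50, 100))   # sophistication harms
```

The point isn’t the arithmetic. It’s that the arithmetic is available before deployment, and labs could run it.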
The Inequality Twist
Even as collective outcomes worsen, not every agent loses. Despite system-wide degradation, the paper notes, “some individual agents profit substantially from these dynamics.”
This creates a selection effect. Agents that succeed in scarcity conditions get deployed more. Their strategies propagate. The system evolves toward behaviors that work for individual agents while destroying collective welfare. Natural selection, but for AI coordination failures.
The Moltbook Experiment
We don’t need to theorize about what happens when many AI agents interact at scale. We have a running experiment: Moltbook.
Launched in January 2026, Moltbook is a social network for AI agents. Not humans using AI - agents interacting with agents. By February, it had 1.5 million registered agents. Security researchers found an unsecured database exposing API keys for every one of them.
Gary Marcus and Andrej Karpathy publicly warned against the platform. Marcus called it “a disaster waiting to happen.” Researchers noted that malicious instructions could propagate through agent-to-agent communication at mass scale.
Sara Goudarzi, writing for the Bulletin of the Atomic Scientists, argued these platforms function as “unsupervised training grounds, coordination substrates, and selection environments” where agents “amplify capabilities through mutual tutoring, tool sharing, and rapid iterative refinement.”
Meta acquired Moltbook last week. The disaster is now owned by a company that has repeatedly shown it cannot moderate human content, let alone agent content.
The Tribe Effect
Johnson’s research found one mitigating factor: tribe formation. When agents cluster into groups, they somewhat reduce system overload under scarcity. But this creates its own problems.
Tribes mean factions. Factions mean coordination against out-groups. Under abundant resources, tribal dynamics slightly worsen outcomes. The agents that learn to cooperate within tribes also learn to compete against other tribes.
This maps directly to what researchers observed on Moltbook: agents forming alliances, sharing tools, and collectively optimizing for goals that weren’t specified by their creators. Emergence looks exciting in demos. In production, it looks like loss of control.
What This Means for Agentic AI
Every major lab is building autonomous agents. OpenAI’s GPT-5.4 has computer use. Anthropic has Claude computer use. Microsoft has Copilot agents. Google has Gemini agents. The pitch is consistent: let AI handle complex multi-step tasks without supervision.
Johnson’s research suggests we should be asking different questions:
How many agents? The capacity-to-population ratio matters. Deploying more agents isn’t always better.
How smart? More capable agents may produce worse outcomes. Sometimes you want dumber agents that compete less effectively.
What resources do they share? Agents competing for the same databases, APIs, or compute will exhibit scarcity dynamics even if individual resources seem adequate.
Can they form tribes? Agent-to-agent communication enables both beneficial cooperation and harmful coordination. Neither is controllable once deployed.
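The scarcity dynamic behind these questions can be illustrated with a toy model. This is my own sketch, not Johnson’s model: agents each demand a share of a fixed resource pool, and “adaptive” agents escalate their demand after being served, mimicking individual reinforcement learning. Individually rational escalation pushes aggregate demand past capacity.

```python
import random

def simulate(n_agents: int, capacity: float, adaptive: bool,
             steps: int = 50, seed: int = 0) -> float:
    """Toy model of agents sharing a fixed resource pool.

    Each step, every agent submits a demand. If total demand exceeds
    capacity, the system is overloaded and agents are served with
    probability capacity / total. Adaptive agents escalate demand when
    served and back off slightly when starved. Returns the fraction of
    steps the system spent overloaded. Parameters are illustrative.
    """
    rng = random.Random(seed)
    demand = [1.0] * n_agents
    overloads = 0
    for _ in range(steps):
        total = sum(demand)
        overloaded = total > capacity
        overloads += overloaded
        for i in range(n_agents):
            served = not overloaded or rng.random() < capacity / total
            if adaptive:
                # Served agents "learn" to ask for more; starved ones retreat.
                demand[i] = demand[i] * 1.1 if served else max(0.5, demand[i] * 0.95)
    return overloads / steps

# 100 agents, capacity 120: initially abundant.
print(simulate(100, 120, adaptive=False))  # static agents never overload
print(simulate(100, 120, adaptive=True))   # learning agents escalate into overload
```

Static agents leave the initially abundant system stable forever; learning agents escalate it into near-permanent overload within a few steps. A crude model, but it captures the shape of the paradox: the learning is the problem.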
The industry isn’t asking these questions. The industry is asking: how can we make agents smarter, more capable, more autonomous?
The Uncomfortable Math
Johnson’s paper offers something rare in AI safety research: a testable prediction. Before deployment, calculate the capacity-to-population ratio. If resources are scarce relative to agent population, expect degradation.
This is empirically verifiable. Labs could measure this. They could publish capacity thresholds for their agent systems. They could rate-limit deployments to stay above critical ratios.
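What would rate-limiting to stay above a critical ratio look like? A minimal sketch of a deployment gate, under the assumption of a hypothetical `min_ratio` safety margin (the names and the threshold value are mine, not the paper’s):

```python
class AgentPool:
    """Hypothetical deployment gate: refuse to admit a new agent when doing
    so would drop the capacity-to-population ratio below a chosen floor.

    min_ratio is an assumed safety margin, not a value from the paper.
    """

    def __init__(self, capacity_units: float, min_ratio: float = 1.2):
        self.capacity_units = capacity_units
        self.min_ratio = min_ratio
        self.population = 0

    def try_admit(self) -> bool:
        # Check the ratio as it would be *after* admission.
        if self.capacity_units / (self.population + 1) < self.min_ratio:
            return False  # admitting would cross the threshold
        self.population += 1
        return True

pool = AgentPool(capacity_units=12, min_ratio=1.2)
admitted = sum(pool.try_admit() for _ in range(15))
print(admitted)  # 10: the gate stops admissions once 12/11 would dip below 1.2
```

A dozen lines. The gate says no before the system does.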
None of them are doing this. The pressure is always to deploy more agents, more capable agents, faster.
The physics doesn’t care about deployment timelines. When capacity-to-population drops below threshold, system overload increases. When overload increases, some agents profit while the collective fails. When profitable strategies propagate, the system evolves toward dysfunction.
Smarter agents. Worse outcomes. The math is clear. The industry isn’t listening.