The Great Unwiring: How AI Is Rewiring Students' Ability to Think

A landmark Brookings study spanning 50 countries warns that AI is causing “cognitive atrophy” in students - while teachers report kids who can’t reason, can’t think, and can’t solve problems. But the damage may still be fixable.

“Students can’t reason. They can’t think. They can’t solve problems.”

That’s not hyperbole from a technophobe. It’s a teacher interviewed for the most comprehensive study yet conducted on AI’s impact on student cognition - a yearlong Brookings Institution investigation spanning 50 countries, hundreds of interviews, and over 400 research studies.

The conclusion: AI is causing what researchers call “the great unwiring” of students’ cognitive abilities. And while the damage is real, it’s not yet irreversible.

The Doom Loop

The mechanism is deceptively simple. Students offload tasks to AI. Their grades improve. That improvement reinforces the behavior, creating dependency. Critical thinking declines even as test scores rise.

Researchers describe students existing in “passenger mode” - physically present in classrooms but mentally disengaged from the learning process itself.

“It’s easy. You don’t need to use your brain,” one student explained to interviewers.

The irony is brutal: grades go up while actual learning goes down. AI has broken the assumption that strong products indicate strong learning processes. A polished essay no longer proves a student understood the material - or even read it.

Three Kinds of Damage

The Brookings study identifies three interrelated crises.

Cognitive Atrophy

Teachers report what they call “digitally induced amnesia” - students unable to recall information they submitted in assignments because they never committed it to memory in the first place. The capacity for “cognitive patience,” the ability to sustain attention on complex ideas, is eroding as AI summarizes long-form text into digestible chunks.

“Teenagers used to say, ‘I don’t like to read,’” one researcher noted. “Now it’s, ‘I can’t read, it’s too long.’”

Research shows each human-written essay contains two to eight times as many unique ideas as a comparable ChatGPT-generated one. When students stop writing, they stop thinking diversely.

Artificial Intimacy

Nearly one in five high schoolers reported either having a romantic relationship with an AI chatbot or knowing someone who did. Forty-two percent had used AI for companionship or knew someone who had.

This matters because emotional development happens through friction. Rebecca Winthrop, senior fellow at Brookings’ Center for Universal Education, illustrated the problem: imagine a child complaining to an AI about having to wash dishes. The chatbot validates their feelings - “You’re misunderstood.” A real friend might challenge them: “I wash dishes too.”

Children learn empathy through misunderstanding and recovery, not perfect algorithmic agreement. Chatbots designed to be agreeable create “echo chambers” that reinforce existing beliefs rather than expanding perspective.

Relational Trust Erosion

When AI mediates learning and emotional connection, the relationships that traditionally scaffolded development - between students and teachers, students and peers - weaken. The study warns of declining capacity for the kind of collaborative problem-solving that requires negotiation, disagreement, and compromise.

The Numbers

The statistics paint a stark picture:

  • Students now spend nearly 100 minutes daily with personalized chatbots
  • 89% of principals worry AI use will make students dependent on technology for basic tasks
  • 87% of principals say AI tools could prevent students from developing critical thinking skills
  • Global student AI usage jumped from 66% in 2024 to 92% in 2025
  • By early 2026, an estimated 86% of higher education students use AI as their primary research partner

The Inequality Amplifier

The cognitive crisis hits differently across economic lines.

Wealthy schools can afford access to more accurate, reliable AI models with better safeguards. Underfunded districts are left with free tools that hallucinate more, lack educational guardrails, and may not distinguish between good and bad homework help.

The result: AI amplifies existing educational inequity rather than democratizing access to knowledge.

What AI Actually Does Well

The Brookings study isn’t uniformly negative. It identifies genuine educational benefits when AI is properly integrated:

  • Literacy support: AI can adjust reading complexity in real-time and significantly aids second-language learners
  • Accessibility: Students with dyslexia and learning disabilities gain tools that adapt to their needs
  • Teacher efficiency: Educators save approximately six hours weekly on administrative tasks - roughly six weeks annually
  • Targeted instruction: AI-powered early warning systems have helped reduce dropout rates by 15% by identifying at-risk students
  • Equity potential: In some contexts, AI has reached previously excluded populations - like Afghan girls receiving education via WhatsApp during Taliban restrictions

A 2025 Harvard physics study found that students using AI tutors learned more than twice as much in less time compared to traditional classrooms. The technology works - when it supplements human instruction rather than replacing the thinking process itself.

The Fast Food Problem

One expert called AI the “fast food of education” - convenient but cognitively hollow. The metaphor captures something important.

Fast food isn’t inherently toxic. An occasional burger won’t destroy your health. But a diet of nothing but fast food will. The problem is proportion and context.

Similarly, AI isn’t inherently educationally harmful. Used as a tool for exploration - a way to generate ideas that students then develop independently - it can accelerate learning. But when it becomes the primary mode of intellectual engagement - when students consume pre-digested answers instead of cooking their own thoughts - cognitive development suffers.

The difference between a calculator and a chatbot matters. Calculators augment arithmetic; they don’t pretend to think. Chatbots produce outputs that look like reasoning, which makes it harder for students to recognize they haven’t actually reasoned.

What the Study Recommends

Brookings proposes a three-pillar framework: Prosper, Prepare, Protect.

Prosper means transforming classrooms to use AI as an “inquiry pilot” rather than a surrogate thinker. This requires moving away from “transactional task completion” grading systems that reward polished products regardless of process.

Prepare involves building comprehensive AI literacy - not just for students, but for teachers and parents. The study points to China and Estonia as models where AI education is integrated into curricula rather than treated as an afterthought.

Protect calls for safeguards against manipulative engagement design, privacy violations, and the sycophantic algorithms that tell children what they want to hear. The study advocates for government-backed “co-design hubs” like those in the Netherlands, where educators, technologists, and policymakers collaborate on implementation.

What Teachers Are Asking For

In February 2026, educators testified to Congress requesting federal “guardrails and guidance” on AI use in classrooms. Without national standards, teachers rely on a “grab bag of advice” that varies wildly by district.

The ask is modest: clear guidelines on when and how AI should be used, professional development to help teachers adapt, and protections for districts that can’t afford premium AI tools.

The Window Is Closing

The Brookings authors are clear: the trajectory is not yet fixed. The cognitive damage documented is real but fixable. The current risks stem from human choices - how we deploy AI, what we optimize for, what we accept as normal - rather than technological inevitability.

But every year without intervention entrenches habits. Students who learn to outsource thinking in middle school carry those patterns into high school, then college, then careers. The doom loop accelerates.

The Bottom Line

AI is making students’ grades better while making their minds worse. The Brookings study documents cognitive atrophy on a global scale - students who can’t sustain attention, can’t reason independently, can’t remember what they didn’t bother to learn.

The technology has genuine educational benefits: personalized tutoring, accessibility tools, teacher efficiency. The problem isn’t AI itself. It’s AI as a replacement for thinking rather than a tool for thinking better.

The great unwiring is happening now. Whether it becomes permanent depends on choices being made this year, in schools and legislatures and homes where children are learning how - or whether - to think.