Two years after ChatGPT entered classrooms, the verdict is in: a sweeping Brookings Institution study covering 50 countries concludes that AI’s risks in education currently outweigh its benefits. Meanwhile, 64% of American teens now use AI chatbots, teachers are sounding alarms about declining reasoning skills, and the detection arms race has devolved into an absurd cycle where students use AI to hide their AI use.
Here’s what the research actually shows about AI in schools - the good, the bad, and the deeply concerning.
The Usage Numbers Are Staggering
The Pew Research Center’s February 2026 survey of 1,458 U.S. teens paints a clear picture of adoption:
- 64% of teens now use AI chatbots
- 30% use them daily
- 54% use them for schoolwork
- 57% use them to search for information
The kicker: half of parents say their teen uses chatbots, but 64% of teens report using them. Either parents are unaware, or teens are hiding it.
Usage varies by demographics. About six in ten Black and Hispanic teens use chatbots for schoolwork, compared to roughly half of White teens. And Black teens are significantly more likely to report chatbots being “extremely or very helpful” - suggesting AI might be filling gaps in educational support that schools aren’t providing.
What’s Actually Failing: The Cognitive Impact
The headline from the Brookings study isn’t about cheating. It’s about thinking.
Teachers interviewed for the study describe what they’re seeing in stark terms: “Students can’t reason. They can’t think. They can’t solve problems.”
The Brookings report identifies a “doom loop” of AI dependence: students offload thinking to AI, their cognitive muscles atrophy from disuse, and they grow more dependent on AI for basic tasks. Rinse, repeat.
Specific concerns cited in the research:
- Declining reading and writing skills - the “twin pillars of deep thinking”
- Digitally induced amnesia - students can’t recall information they submitted because they never committed it to memory
- Loss of cognitive patience - the ability to sustain attention on complex ideas, eroded by AI’s instant summarization
The report calls AI’s “frictionless” nature its most dangerous feature for education. The struggle to synthesize multiple papers, to work through a complex math problem, to wrestle with difficult ideas - that’s where learning happens. Remove the struggle, remove the learning.
“AI is the fast food of education,” the report warns. “Convenient and satisfying in the moment, but cognitively hollow over the long term.”
The Detection Arms Race Has Gotten Ridiculous
Here’s where things get absurd.
According to a Center for Democracy and Technology survey, 68% of middle and high school teachers now use AI detection tools - up substantially from the previous year. In the 2023-24 school year, 63% of teachers reported students for AI use in schoolwork, up from 48% the year before.
The problem: the detectors don’t work reliably.
A 2026 analysis found a mean false positive rate of 61.3% for essays written by Chinese students, compared to 5.1% for essays by U.S. students. Non-native English speakers and neurodivergent students are being disproportionately flagged for “AI use” when their writing is entirely their own.
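To put those rates in perspective, here’s a back-of-the-envelope sketch of what they mean for a single classroom. Only the false positive rates come from the analysis above; the class size and per-student essay count are illustrative assumptions.

```python
# Back-of-the-envelope: expected false accusations from AI detectors.
# The false positive rates come from the 2026 analysis cited above;
# the class size and essay count are illustrative assumptions.

FPR_US = 0.051        # false positive rate, essays by U.S. students
FPR_CHINESE = 0.613   # false positive rate, essays by Chinese students

CLASS_SIZE = 30       # assumption: students in one class
ESSAYS_PER_YEAR = 10  # assumption: screened essays per student per year

def expected_false_flags(fpr: float, students: int, essays: int) -> float:
    """Expected number of honest essays wrongly flagged as AI-written,
    assuming each essay is screened independently at the given rate."""
    return fpr * students * essays

print(expected_false_flags(FPR_US, CLASS_SIZE, ESSAYS_PER_YEAR))       # ~15.3
print(expected_false_flags(FPR_CHINESE, CLASS_SIZE, ESSAYS_PER_YEAR))  # ~183.9
```

Even at the “good” 5.1% rate, screening 300 honest essays produces roughly 15 false accusations a year; at 61.3%, the detector wrongly flags most of the honest work it sees.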
One widely reported case: 17-year-old Ailsa Ostovitz was accused of academic misconduct after an AI detector assigned her original work a 30.76% AI-probability score. The teacher eventually acknowledged the software’s error - but only after the accusation had caused significant distress.
Multiple students have filed lawsuits against universities over false accusations. Research documents cases where false positives led to academic withdrawal and mental health crises.
Meanwhile, a cottage industry of “AI humanizer” tools has emerged. NBC News reports that students are now using AI to rewrite their AI-generated content to evade detection. Tools like BypassGPT and Undetectable AI market specifically to students who want to make AI text “indistinguishable from human writing.”
So we have students using AI to cheat, teachers using AI to detect cheating, and students using more AI to evade detection. Nobody is winning this arms race except the tool vendors.
What’s Actually Working
It’s not all doom. The Brookings report identifies real benefits, though they come with caveats.
Reaching excluded students: One of the strongest arguments for AI in education is its ability to reach children locked out of traditional schooling. The report highlights programs in Afghanistan where AI helps deliver education to girls and women banned from formal schools by the Taliban - digitizing curriculum and delivering lessons via WhatsApp in Dari, Pashto, and English.
AI tutoring shows promise: Khan Academy’s Khanmigo grew from 68,000 users in 2023-24 to over 1.4 million by mid-2025, expanding from 45 to more than 380 district partners. Prior research shows students who used Khan Academy for 30 minutes of additional math practice per week saw greater-than-expected gains on standardized assessments.
But there’s a catch: research also indicates that over-reliance on AI during practice can reduce performance on exams taken without assistance. The biggest challenge with Khanmigo, Khan Academy leaders acknowledge, is the same as all educational technology: achieving meaningful student engagement.
Teacher support: A University of Michigan survey found 78% of teachers say generative AI can help with classroom challenges - managing data, building lesson materials, grading, differentiating lessons. But 85% also expressed concerns about student use. Teachers see the tool’s potential for themselves while worrying about its effects on their students.
What Students Actually Think
Students aren’t naive about AI’s downsides. The Pew survey found that 59% of teens believe AI cheating happens “at least somewhat often” at their school, with a third saying it happens “extremely or very often.”
Yet most don’t see themselves as part of the problem. Only one in ten teens say they do “all or most” of their schoolwork with chatbot help. Larger shares say “some” (21%) or “a little” (23%).
As for formal AI coursework: according to an Honorlock survey of roughly 1,000 college students, only 31% are even aware their school offers AI courses, and fewer than 20% have taken one. More than 56% are required to use AI in coursework, and 63% use it for assignments - but mostly for low-level tasks like editing, brainstorming, and explaining concepts.
There’s also a career impact: 14% of students surveyed said they’re no longer considering computer science, partly because of AI. At the same time, 65% say AI tools are “essential for success.” Students are trying to navigate between fear of being replaced and pressure to adopt the tools everyone else is using.
The Bottom Line
The Brookings report offers a useful reframe. The problem isn’t AI itself - it’s how schools are using it, and what happens when schools don’t adapt.
Their core recommendation: make schooling less focused on “transactional task completion” and grades, more focused on fostering curiosity. Students will be less inclined to outsource their thinking if they’re actually engaged by the work itself.
That’s easier said than done. But it’s a more honest diagnosis than either the techno-optimist (“AI will revolutionize education!”) or the techno-pessimist (“ban it all!”) camp offers.
For now, the data suggests we’re in a messy middle: AI is here, most students are using it, many are using it poorly, detection doesn’t work, and nobody has figured out how to make it help rather than harm cognitive development. The schools that figure this out first will have done something genuinely valuable. The rest will keep playing whack-a-mole with humanizer tools while their students forget how to think.