AI in Schools: Cheating Up 370%, Detection Failing, Teachers Overwhelmed

More than half of students now use AI for homework, AI detection wrongly flags 1 in 10 ESL students, and 76% of teachers have received no training. A look at what's actually happening in classrooms.

Here’s the state of AI in American classrooms as of March 2026: 54% of students use ChatGPT or similar tools for homework. AI-related misconduct cases have grown from 1.6 to 7.5 per 1,000 students - a 370% increase since 2022. And 76% of teachers haven’t received any training on how to handle it.
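The 370% figure follows directly from the per-1,000 misconduct rates the article cites. A quick sanity check of that arithmetic (a minimal sketch, using only the numbers reported above):

```python
# Growth in AI-related misconduct cases per 1,000 students, per the article.
before = 1.6  # cases per 1,000 students, 2022
after = 7.5   # cases per 1,000 students, 2026

pct_increase = (after - before) / before * 100
print(f"{pct_increase:.2f}% increase")  # 368.75%, rounded to ~370% in reporting
```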

The gap between what’s happening in classrooms and what schools are prepared to address has become a chasm.

Students Are Using AI. A Lot.

According to RAND’s latest survey data, more than half of students and teachers now use AI for school-related work, representing increases of more than 15 percentage points from just a year or two ago. A Pew Research survey from February found 57% of teens use chatbots to search for information and 54% use them for homework help.

About a quarter of all teens say chatbots have been “extremely or very helpful” for completing schoolwork. Another 25% say “somewhat helpful.” Combined, that’s roughly half of teens getting at least some meaningful assistance from AI.

Usage patterns aren’t uniform. Pew’s data shows about 60% of Black teens use AI chatbots for schoolwork, compared to roughly half of white teens. Whether this reflects different access patterns, different educational contexts, or different attitudes toward the technology isn’t clear - and schools aren’t tracking it.

What students themselves acknowledge: 59% of teens think using AI to cheat is “a regular occurrence” at their school.

The Detection Arms Race Is Failing

Schools turned to AI detection tools like Turnitin and GPTZero to stem the tide. The results have been mixed at best, harmful at worst.

Research from Stanford and MIT found that essays by non-native English writers are flagged as AI-generated at a rate of 9.24% - nearly 1 in 10 human-written essays marked as machine-made. Complaints about Turnitin false positives run at roughly 7%, with higher rates still for ESL students.

That’s not a rounding error. That’s a system that systematically punishes students for writing in a second language.

Vanderbilt University disabled Turnitin’s AI detection in August 2023 after calculating that even a 1% false positive rate would result in roughly 750 wrongly flagged papers. They cited “emotional and psychological harm” from false accusations. Northwestern, Yale, Johns Hopkins, UCLA, UC San Diego, and University of Michigan-Dearborn have followed suit.
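Vanderbilt’s back-of-the-envelope math implies an annual submission volume of roughly 75,000 papers - a figure inferred here from the article’s numbers, not stated by the university. A sketch of the same calculation:

```python
# Even a tiny false positive rate produces many wrong accusations at scale.
false_positive_rate = 0.01  # the 1% rate Vanderbilt cited
wrongly_flagged = 750       # papers Vanderbilt estimated would be misflagged

# Inferred (not confirmed) number of papers checked per year.
implied_volume = wrongly_flagged / false_positive_rate
print(f"Implied annual submissions: {implied_volume:,.0f}")  # 75,000
```

The point of the exercise: at institutional scale, accuracy claims that sound high (99%) still translate into hundreds of false accusations per year.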

Meanwhile, faculty rate traditional plagiarism policies as only 49% effective and AI-specific policies even lower at 28%.

The fundamental problem: detection tools need to distinguish between “AI-generated” and “written by someone who writes in a structured, clear, academic style.” Those categories overlap significantly, especially for students who’ve been trained to write in exactly that style.

Teachers Are Being Thrown to the Wolves

Forty-four percent of teachers, principals, and district leaders told EdWeek’s Research Center they hadn’t received any professional development on AI. Another study found 76% report receiving no training whatsoever.

Of those who did get training, 29% said it was one-time professional development, 19% had training more than once, and only 8% had ongoing support. One superintendent described their first AI professional development session as “super overwhelming and it scared me.”

Only 13% of educators say their district has an AI policy that’s been made clear to both students and teachers.

Teachers are being asked to evaluate whether students used AI, enforce policies that often don’t exist, integrate new tools they haven’t been trained on, and do it all while managing the same class sizes and workloads they had before.

The NYC Example

New York City - the nation’s largest school district - exemplifies the policy vacuum. Teachers report seeing a spike in AI-assisted cheating while “anxiously awaiting” guidance from the district.

Schools Chancellor Melissa Aviles-Ramos laid out what Gothamist called a “vague framework” calling for “responsible use” while tighter guidelines are “still being developed.” Teachers are left to make individual calls on what constitutes appropriate use, creating inconsistent enforcement across classrooms and schools.

Some teachers have banned all AI. Others encourage it as a learning tool. Students navigate between classes with contradictory rules, often unclear on what’s allowed where.

The Bend, Oregon Blowup

Not everyone is waiting for schools to figure it out. In Bend, Oregon, parents organized one of the most visible pushbacks against AI in education.

More than 1,100 parents signed a petition requesting the school board “reduce screen time in schools and reevaluate the District’s increasing reliance on Big Tech.” Parent protests dominated a February 10 school board meeting after the district deployed an AI chatbot called “Raina” that some parents worried could cause children to form unhealthy attachments.

The tech company behind Raina quietly removed it from student-facing platforms. The district’s technology leader was unaware the bot had been pulled even as he defended it during the public outcry.

The parents’ statement cut to a core concern: “[That] the District was unaware of this roll-back further undermines the credibility of those who continue to claim the products given to our students are well vetted and safe.”

The Positive Angle No One Talks About

It’s not all crisis. Teachers report that some students who refuse to participate in class or ask for help are more willing to engage with AI tutors - talking to them, taking risks, making mistakes. Those are essential elements of learning that some students won’t do in front of peers or teachers.

AI tutoring tools can provide personalized feedback at scale that no teacher with 30 students per class could possibly offer. Students who struggle with reading can have concepts explained multiple ways until something clicks. Those are genuine educational gains.

The question isn’t whether AI has value in education. It’s whether schools have the resources, training, and policies to capture that value while managing the risks.

What States Are Doing

Ohio now requires every district to publish an AI plan by July 2026. California legislators are drafting parallel guidance for K-12 districts. Chatbot bills are advancing in Arizona, Iowa, Georgia, Illinois, New York, Oregon, and Washington.

Virginia lawmakers are questioning the technology’s impact on students’ safety, critical thinking, and learning skills as the state adopts AI tools faster than it develops guardrails.

The federal government has recommended banning AI chatbots and companion apps for minors entirely - a proposal that seems increasingly disconnected from a reality where more than half of teenagers already use them.

The Bottom Line

Education is experiencing the collision of three forces: technology that’s already in students’ hands, policies that haven’t caught up, and teachers who’ve been given neither training nor clear guidance.

The current approach - deploy detection tools, hope teachers figure it out, draft policies “later” - isn’t working. Detection is biased against exactly the students who often need the most support. Policies are inconsistent or nonexistent. Teachers are overwhelmed.

What would help: actual professional development funding, clear and consistently enforced policies, acknowledgment that AI isn’t going away, and honest conversations about what education means when information lookup is effectively free.

What we have instead: a 370% increase in cheating cases, nearly 1 in 10 ESL students wrongly flagged as cheaters, and parents who no longer trust their districts’ technology decisions.

The technology isn’t the problem. The institutional failure to adapt to it is.