A new RAND Corporation survey captures the strange contradiction at the heart of AI in education: students know it’s making them worse at thinking, and they’re using it more than ever.
Between May and December 2025, AI homework use among students jumped from 48% to 62%. Over the same period, the share who believe AI harms their critical thinking rose from 54% to 67%.
They’re not wrong. And they’re not stopping.
The RAND Numbers
The American Youth Panel survey tracked over 1,200 students aged 12-29. The findings:
- 62% now use AI for homework, up from 48% in May
- 67% say using AI harms critical thinking, up from 54%
- 75% of female students report thinking harm, versus 59% of male students
- ChatGPT remains dominant (53%), but Google Gemini use more than doubled to 28%
The growth came primarily from middle and high schoolers. College student usage stayed relatively flat — suggesting the AI habit is getting set earlier.
Most common uses: getting better explanations of assignments (38%), brainstorming ideas (35%), looking up facts (33%), and drafting or revising writing (33%).
Only about a third of students said their school has an actual AI policy. For many, the rules vary by teacher.
NYC: Policy Coming, Parents Demanding Moratorium
New York City schools are finally getting AI rules — maybe.
Chancellor Kamar Samuels told a District 2 town hall on March 5 that guidance would come “in the coming weeks” with a 45-day public comment period. It’s now mid-March with nothing published.
Meanwhile, opposition is growing. Five Community Education Councils have passed resolutions calling for a two-year moratorium on AI in schools. At a recent protest, a middle-school student presented a petition with over 1,300 signatures demanding a pause on AI implementation.
Parents cite concerns about bullying, privacy, and — echoing the RAND findings — erosion of critical thinking. Samuels reportedly told the student he “agreed with the contents of the petition,” though the department continues planning AI rollout.
Complicating things: the department has proposed a new AI-focused high school in Manhattan. Critics question the timing. How do you launch an AI-focused school when you haven’t figured out basic AI guidelines for existing schools?
Utah: Nine Bills, Governor’s Desk
While NYC debates, Utah acts.
The state legislature passed nine AI-related bills in its 2026 session, covering schools, deepfakes, healthcare AI, and age verification. Governor Spencer Cox has 20 days to sign or veto.
Key school provisions in the “Balance Act”:
- Requires every school district to create AI and technology use policies
- State Board of Education must develop a model policy
- Establishes guidelines for “balanced use” of AI in classrooms
Utah also passed a bell-to-bell phone ban — a separate issue, but part of the same legislative push to address technology’s effects on students.
The approach is more framework than prescription. Schools get flexibility on implementation, but they can no longer ignore the question.
Teachers: Using AI, Worried About Students
The disconnect isn’t just among students. Teachers face the same contradiction.
A recent survey found 60% of teachers used AI this year, saving up to six hours per week on tasks like grading, lesson planning, and administrative work.
Yet most remain concerned about student use:
- 70% worry AI weakens critical thinking and research skills
- 57% believe AI decreases students’ independent thinking
- 52% say it decreases critical thinking
Teachers see the productivity benefits for themselves while watching those same tools hollow out student learning. The message doesn't add up: you can't warn students that AI is eroding their skills while using it to save six hours a week on your own work.
Training hasn't caught up either. Fewer than a third of teachers received guidance on effective AI use, and fewer than one in five learned how to monitor AI systems. Schools are asking teachers to manage a technology they were never taught to understand.
The Central Contradiction
The RAND data captures something important: students aren’t naive. They recognize what’s happening to their thinking. The 13-point jump in critical thinking concerns — from 54% to 67% — suggests growing awareness, not denial.
Yet awareness doesn’t translate to behavior change. Usage grew faster than concern did.
This mirrors every technology adoption pattern we’ve seen: understanding the downsides doesn’t stop use. Students know social media affects their mental health and keep scrolling. They know AI is affecting their thinking and keep prompting.
The policy question is whether schools can create structures that channel AI toward productive uses — explaining concepts, providing feedback, enabling practice — while limiting dependency. Some schools are trying traffic-light systems: green for AI-allowed tasks, yellow for partial use, red for off-limits.
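The traffic-light scheme described above is essentially a lookup from task type to permission tier. Here is a minimal sketch of how a school might encode one; the task categories and labels are hypothetical examples for illustration, not drawn from any real district's policy.

```python
# Sketch of a "traffic-light" AI-use policy: each assignment type maps to
# a permission tier (green = allowed, yellow = partial use, red = off-limits).
# All category names here are hypothetical.

from enum import Enum


class AIUse(Enum):
    GREEN = "AI allowed"
    YELLOW = "partial use, with disclosure"
    RED = "off-limits"


# Hypothetical mapping from assignment types to tiers.
POLICY = {
    "concept_explanation": AIUse.GREEN,   # e.g. asking AI to re-explain a lesson
    "brainstorming": AIUse.YELLOW,        # e.g. generating ideas, cited in the work
    "graded_essay": AIUse.RED,            # assessed writing stays AI-free
}


def check_task(task: str) -> str:
    """Return the policy label for a task, defaulting to off-limits."""
    tier = POLICY.get(task, AIUse.RED)
    return f"{task}: {tier.value}"


if __name__ == "__main__":
    for t in ["concept_explanation", "brainstorming", "graded_essay", "unlisted_task"]:
        print(check_task(t))
```

Note the design choice of defaulting unlisted tasks to red: a framework like this only limits dependency if the fallback is restrictive rather than permissive.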
Whether any of this works remains unproven. The detection tools failed. The bans failed. Maybe the frameworks will fare better.
What to Watch
- NYC’s delayed guidance: When it finally drops, the 45-day comment period will reveal how parents actually feel
- Utah’s implementation: Will schools use the flexibility to develop thoughtful policies, or default to the minimum?
- The 67% number: If student concern continues rising while usage also rises, we’re looking at a generation with clear eyes and no discipline — not a great combination for cognitive development
The students have figured out the problem. They just haven’t figured out how to stop.