62% of Students Now Use AI for Homework — and Most Fear the Cost
A new RAND report shows AI homework use jumped to 62% of students by late 2025. More striking: 67% believe it is harming their critical thinking. What should schools do?
Students are using AI for homework more than ever. And a majority of them are worried it's making them worse at thinking. That tension sits at the heart of one of education's most pressing challenges in 2026.
The Numbers Tell a Complicated Story
A RAND Corporation report published in March 2026 tracked AI homework use among middle school, high school, and college students from May to December 2025. The findings are striking:
- AI homework use rose from 48% to 62% in just seven months
- Among middle schoolers specifically, use jumped from 30% to 46%
- High school use reached 63% by December 2025
- 67% of students say using AI for schoolwork harms their critical thinking — up from 54% earlier in 2025
That last figure is the one that should give educators pause. Students are not naive about what is happening when they outsource thinking to a chatbot. They are choosing convenience over confidence in their own skills — and many are aware they are making that trade-off.
What Students Are Actually Doing
The RAND data reveals that students are using AI across a wide range of homework tasks:
- Looking up answers and getting explanations
- Brainstorming ideas before writing
- Revising and editing written work
- Researching topics
- Solving maths problems step-by-step
Some of these uses are relatively benign — AI-assisted brainstorming is not fundamentally different from discussing ideas with a study partner. But others — particularly using AI to generate final written submissions or to solve problems without working through the logic — are much more concerning from a learning standpoint.
The "Mental Muscle" Problem
Educators who study metacognition use a consistent metaphor: thinking is a muscle. When you default to a chatbot before attempting a problem yourself, you skip the productive struggle that builds genuine understanding and cognitive flexibility.
The concern is not just about dishonesty. Even students who are being fully transparent with their teachers about AI use — citing it, disclosing it, acknowledging it — may still be depriving themselves of the effortful processing that makes learning stick.
That 67% of students fear this effect in themselves suggests a significant number of young people grasp this dynamic intuitively, even if they continue the behaviour anyway.
How Schools Are Responding
The policy landscape in 2026 has matured significantly from the panic-and-ban reactions of 2023. Most schools and universities are now moving toward frameworks that:
Require disclosure, not prohibition. Students are expected to cite AI use and describe how they used it, rather than submit AI-generated work without acknowledgement.
Shift to process-based assessment. Journals, step-by-step project logs, drafts with revision histories, and oral components are all harder to outsource to AI than a final essay. More teachers are designing assignments that foreground the thinking process rather than just the output.
Reinstate in-person assessment. For high-stakes evaluations, many institutions are returning to invigilated, handwritten, or viva-style formats where AI assistance is simply not available.
Teach AI literacy explicitly. The most forward-thinking schools are not just managing AI — they are teaching students how to use it well, including how to evaluate AI outputs critically, how to use AI as a starting point rather than a final answer, and when not to use it at all.
What the Research Says About Design
eSchoolNews reported in February 2026 on research challenging the blanket fear around AI in classrooms: when teachers design and guide AI experiences, the technology can actually deepen critical thinking rather than displace it. The key variable is structure. An unstructured "ask ChatGPT" prompt produces passive consumption; a structured "use AI to generate three arguments for a position, then write a rebuttal" prompt requires active, critical engagement.
That distinction — between AI as a shortcut and AI as a thinking scaffold — is the most important pedagogical question schools need to be asking right now.
A Question of Design, Not Just Policy
The RAND findings are a reminder that the problem of AI and critical thinking is not primarily a compliance problem. No policy, however well enforced, will produce a generation of strong independent thinkers if the underlying learning experience is still designed around outputs rather than processes.
Schools that will serve their students best in 2026 and beyond are the ones redesigning what learning looks like — with AI as a tool that students learn to use well, not as a threat to be managed.
NeuralClass covers the research, tools, and policy shaping how AI is used in classrooms. Browse our research coverage →