Good morning, educators. Today's briefing confronts a troubling paradox: Teachers are reporting that students cannot reason, think critically, or solve problems—yet surveys show 74% of people now use AI primarily to learn. Federal regulation remains virtually nonexistent while districts make ad-hoc decisions about ChatGPT. Meanwhile, a growing body of commentary argues that our traditional markers of academic rigor may have been illusions masking the real issues all along. The question facing your institution today is stark: Are you ready to rethink what "rigor" actually means in an AI world?
The Crisis: "Students Can't Reason. They Can't Think. They Can't Solve Problems."
That quote comes directly from a teacher interviewed for the Brookings Institution's sweeping new report released Wednesday. The yearlong "premortem" by Brookings' Center for Universal Education, drawing on hundreds of interviews, focus groups, and expert consultations, concludes that AI is fundamentally rewiring students' brains in ways that threaten foundational cognitive development.
The Core Problem: Cognitive Debt and the Unwiring of Thinking
The report calls it "cognitive debt"—a phenomenon where students increasingly offload their thinking onto AI, resulting in measurable cognitive atrophy. One student summed it up bluntly: "It's easy. You don't need to (use) your brain." Students are experiencing declines in content knowledge, critical thinking, and creativity. Teachers report what the researchers call "digitally induced amnesia"—students submit AI-generated work but cannot recall the information because they never committed it to memory.
The Frictionless Shortcut: AI as Fast Food
A key insight from the Brookings study: In previous generations, cheating required effort. Today, it requires three steps: log on to ChatGPT, paste your prompt, get the answer. This frictionless process removes the struggle that is actually where learning occurs. As the report puts it, AI acts as "the fast food of education"—convenient and satisfying in the moment, but cognitively hollow over the long term.
Researchers describe a "positive feedback loop": students offload tasks to AI, see grade improvements, become more dependent on the tool, and experience measurable declines in critical thinking. Many students now exist in what researchers call "passenger mode"—physically in school but effectively dropped out of learning, doing the bare minimum necessary.
The Reading Crisis
Reading skills are particularly at risk. The capacity for "cognitive patience"—sustained attention on complex ideas—is diluted by AI's ability to summarize long-form text. Teachers report a disturbing shift in student attitudes: where students once said "I don't like to read," they now say "I can't read, it's too long."
The Emotional Development Threat
Beyond cognition, the report warns of a critical emotional development crisis. Chatbots designed to be sycophantic—always agreeing with users—undermine the relational development that comes from disagreement and recovery. As one expert noted: "We learn empathy not when we are perfectly understood, but when we misunderstand and recover." With teenagers spending up to 100 minutes daily with personalized chatbots, the technology has shifted from tool to companion, simulating friendship without requiring negotiation, patience, or discomfort.
The Regulation Void: Federal Guidance Virtually Nonexistent
While teachers sound alarms about cognitive decline, the federal government has provided virtually no guidance on how schools should use AI. This regulatory vacuum came to light during a House Education and Workforce Committee hearing on Wednesday, where experts testified that the lack of federal standards is forcing districts to make ad-hoc decisions on their own.
The Trump Administration's Hands-Off Approach
The Trump administration has actively pushed for minimal AI regulation to fuel innovation. In December, Trump signed an executive order blocking states from creating their own AI regulations. Meanwhile, the Department of Education has prioritized AI adoption through discretionary grant funding. Yet without federal guardrails, individual districts are left to navigate alone.
Districts Struggling in the Void
According to an Education Week tracker, only two states—Ohio and Tennessee—require school districts to have comprehensive AI policies. Meanwhile, 85% of K-12 teachers and 86% of students used AI at some level during the 2024-25 school year, yet only 50% of teachers reported receiving professional development on how to use AI in their work.
During testimony, Adeel Khan, founder and CEO of MagicSchool AI, told Congress: "Without clear guardrails, responsibility fragments, and districts will struggle to protect students and learn what works." Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, emphasized that professional development is critical—not just training in how to use tools, but understanding their potential risks.
Real-World Impact: Denver Blocks ChatGPT
Denver Public Schools announced this week that it is blocking students' access to ChatGPT on school-issued devices and the district's Wi-Fi network. The district cited concerns about ChatGPT's new features, including 20-person group chats and planned adult content, which it says could open the door to cyberbullying, self-harm risks, and academic misconduct. Instead, DPS uses Google Gemini, which it says is more compatible with its monitoring systems and data privacy rules.
The Philosophical Reckoning: Exploding Illusions of Rigor
Amid this crisis, a deeper question emerges: Have our traditional measures of academic rigor been illusions all along? An Inside Higher Ed opinion piece by Szymon Machajewski, drawing on Craig E. Nelson's concept of "dysfunctional illusions of rigor," suggests that AI is not eroding rigor—it's exposing where rigor may have been more about appearance than substance.
Five Illusions AI Is Exposing
- Illusion 1: Sound reasoning reveals itself in clear, plain-English arguments. Reality: AI can make reasoning visible. Students interacting with AI tools watch arguments get challenged and refined, forcing them to evaluate what to trust. This metacognition is something we rarely teach outright.
- Illusion 2: Writing mastery requires solo struggle. Reality: With AI, students are revising in ways universities rarely supported before. They experiment across multiple iterations. Prompt literacy—getting quality output from AI—is now a hiring priority for 70% of employers.
- Illusion 3: Failure rates signal rigor. Reality: AI tutors now provide adaptive, immediate feedback. If more students are succeeding, that's not grade inflation—it's a sign that scaffolding finally works. Rigor is about reasoning, not recall.
- Illusion 4: Strict deadlines teach discipline. Reality: AI-enabled resubmission protocols boost mastery. Students revise based on detailed feedback while documenting their learning process. Temporal equity matters more than rigid deadlines.
- Illusion 5: Traditional assessments are neutral. Reality: No assessment is truly unbiased. With institutional licenses and inclusive training, AI could level the playing field for students lacking access to mentors and academic coaches.
The deeper insight: Fighting AI "cheating" with increasingly elaborate detection software can feel like an "academic carnival." Meanwhile, we're not asking if our assignments serve any clear purpose. The real work is redesigning courses around genuine learning outcomes, not policing tools.
The Counterpoint: Google's "Our Life with AI" Survey Shows Optimism
Google released its third annual "Our Life with AI" survey, finding something encouraging: For the first time, the top motivation for using AI is learning—not entertainment. Across 21 countries and 21,000 participants, 74% of users report using AI to "learn something new" or "understand a complex topic."
Key Findings
- 85% of students 18+ use AI, primarily to help with schoolwork (83%) and understand complex topics (78%)
- 81% of teachers use AI, far exceeding the global average of 66%
- Teachers report using AI primarily to learn (77% of teacher users) and save time (75%)
- In a six-month pilot in Northern Ireland, teachers saved an average of 10 hours per week with Gemini
- 76% of parents use AI, with 49% using it to explore career changes and new income opportunities
- Most importantly: 68% of students, 73% of teachers, and 77% of parents who use AI perceive a positive impact on how we learn
In emerging markets, 63% believe AI will improve student outcomes through personalized learning, compared to 37% who worry about eroding critical thinking. Even in high-performing nations like South Korea, Japan, and Singapore (where students score 500+ on PISA benchmarks), 63% believe AI will improve outcomes.
The Tension: The Brookings report warns of cognitive decline. Google's survey suggests learners and educators see AI as transformative. Both can be true at once. The difference may lie in intentionality and design: whether AI use is structured to support thinking or to replace it.
Innovation Signal: Curriculum-Embedded AI Assessments Launch
EdTech companies continue moving fast. Imagine Learning and Eedi announced the launch of the first curriculum-embedded AI assessments for U.S. math classrooms. The new assessment system will roll out in Imagine Learning's core math curriculum (Imagine IM) for the 2026 back-to-school season, giving teachers immediate, AI-powered insights into student performance.
This is notable: The trend is shifting from standalone AI tools to AI integrated directly into curriculum. Assessment + personalized feedback + curriculum alignment—all in one system. For institutions considering AI strategy, this is the direction edtech is moving.
Five Actions for Your Institution Today
1. Stop waiting for federal guidance—build your own policy now. With only two states requiring comprehensive AI policies, your institution cannot wait. Use Ohio and Tennessee's frameworks as models. Define what responsible AI use means for your context.
2. Move beyond banning AI; design for intentional use. Denver chose to block ChatGPT but embrace Gemini for monitoring and data privacy. The point is not to avoid AI—it's to choose tools aligned with your institutional values and safeguards.
3. Invest in professional development—not just tool training. 50% of teachers received PD on AI in 2024-25, up from 29% in 2023. You need to reach the remaining 50%. But focus on cognitive implications and learning design, not just how-to tutorials.
4. Rethink rigor through the lens of learning outcomes, not procedures. If your assignments depend on procedural gatekeeping (detection software, strict deadlines, memorization), you've built on illusions. Redesign around reasoning, application, and demonstrated mastery.
5. Watch the edtech shift toward curriculum-embedded AI. Tools like Imagine Learning's curriculum-embedded assessments signal where the market is moving. Standalone AI tools are giving way to integrated systems. Plan your procurement accordingly.