HigherEd AI Daily: Feb 25 – Anthropic Ditches Core Safety Pledge Under Pentagon Pressure, 80% of Students Say AI Improved Grades But Only 20% of Universities Have a Policy, WiseTech Axes 2,000 Jobs: “Manual Coding Is Obsolete”

Hello,

Essential Links

• Anthropic Ditches Core Safety Pledge Under Pentagon Pressure: https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change
• Coursera Report: 95% Using AI, But Only 25% of Educators Feel Prepared: https://blog.coursera.org/ai-in-higher-education-2026/
1️⃣ Anthropic Capitulates: Ditches Core Safety Pledge Under Pentagon Ultimatum
In a stunning reversal, Anthropic announced (Feb 25) that it is loosening its signature "Responsible Scaling Policy," the foundational safety commitment that justified its existence as a "safer alternative to OpenAI." Under the revised policy, the company no longer commits to automatically pausing or delaying model development when it judges a model potentially dangerous. The shift came the same day Defense Secretary Pete Hegseth met with CEO Dario Amodei and threatened to cut ties, including a $200M contract, unless Anthropic accepted Pentagon terms for military AI use.
Implication for Higher Ed: Safety-first rhetoric is now revealed as negotiable. If Anthropic can abandon its core principle under government pressure, what other vendor commitments are hollow? Universities banking on "responsible AI" partnerships with vendors have been warned: vendor principles evaporate when money or power is at stake.
Action Item (90 min): Audit your vendor contracts. For each major AI partnership (Claude, ChatGPT, etc.), identify any "safety" or "ethics" clauses. Map which clauses could be renegotiated or abandoned under pressure. Brief your board on vendor commitment fragility. Consider diversifying vendors precisely because no single vendor's principles are guaranteed.
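If your contracts exist as text exports, a quick keyword scan can jump-start the clause inventory. Below is a minimal Python sketch, assuming agreements are saved as .txt files in a local "contracts/" folder; the folder name and keyword list are illustrative, not drawn from any actual vendor agreement.

```python
# Minimal sketch: flag contract paragraphs containing safety/ethics language.
# Assumptions: contracts are plain-text exports in ./contracts/; the keyword
# list is a starting point, not a legal standard.
from pathlib import Path

KEYWORDS = ["safety", "responsible", "ethics", "ethical", "pause", "harm", "audit"]

def flag_clauses(folder: str = "contracts") -> None:
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Split on blank lines so each hit maps to a readable paragraph/clause.
        for i, clause in enumerate(text.split("\n\n"), start=1):
            matched = [k for k in KEYWORDS if k in clause.lower()]
            if matched:
                print(f"{path.name} | clause {i} | matched: {', '.join(matched)}")

if __name__ == "__main__":
    flag_clauses()
```

The scan only tells counsel where to look; flagged paragraphs still need human review before you brief the board.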
2️⃣ The Policy Gap: 80% of Students Say AI Improved Grades, But Only 20% of Universities Have a Formal Policy
A major survey (Feb 25) reveals a stark governance vacuum: 80% of students report that AI has improved their academic performance, yet only 20% of universities have a formal AI use policy. Students are moving fast; institutions are stuck. The gap signals institutional paralysis: wary of both banning and endorsing AI, universities have defaulted to inaction, leaving students to navigate the ethical ambiguity alone.
Implication for Higher Ed: Absence of policy is not neutrality. Without clear guidance, students make their own rules—which often means maximum AI use without accountability. Faculty stress about cheating rises. Accreditors will soon demand proof that institutions intentionally designed AI use, not just tolerated it by default.
Action Item (120 min): Odds are your institution is in the 80% without a formal policy. This is the week to change that. Convene a task force (faculty, students, general counsel) and draft a one-page "AI Use Framework" that: (1) clarifies where AI is encouraged, permitted, or prohibited; (2) explains the reasoning; (3) commits to annual review. Share the draft with the faculty senate by end of week.
3️⃣ WiseTech's 2,000-Job Massacre: "Manual Coding Is Obsolete," CEO Says
Australian logistics software company WiseTech Global announced (Feb 25) that it will eliminate nearly 2,000 jobs, roughly one-third of its global workforce, over two years as AI replaces manual software development. Its CEO explicitly stated that AI has rendered "manually writing code" obsolete. The company will redeploy some workers into new roles, but the scale of the cuts signals that software development jobs, traditionally stable and high-wage, are now under existential threat.
Implication for Higher Ed: Computer science and IT programs are training students for jobs that may not exist in 5 years. WiseTech's CEO is not speculating—he is declaring that AI-driven development is already displacing human coders. Universities must urgently redesign CS curricula to emphasize what AI cannot do: system design, requirements gathering, testing, maintenance, ethical governance, human-centered problem-solving.
Action Item (120 min): Meet with your CS/IT department chair. Ask: "Which skills are we teaching that AI will commoditize in 5 years?" and "What skills will AI amplify or make more valuable?" Use their answers to redesign your intro CS curriculum. Add courses on AI governance, responsible systems design, and human-centered computing. Share new curriculum roadmap with employer advisory board for validation.
4️⃣ College Board: Faculty Express "Near-Universal Concern" That AI Undermines Writing and Critical Thinking
A new College Board survey (Feb 25) found that faculty across disciplines express near-universal concern that student AI use is eroding original writing and critical thinking skills. Nearly 95% of faculty worry students will over-rely on AI. Yet the same survey shows 80% of students feel AI has helped their learning. This disconnect reveals a fundamental trust rupture between faculty and students about what "learning" means.
Implication for Higher Ed: The faculty-student gap on AI is not about technology—it's about competing definitions of educational value. Faculty see learning as effortful struggle; students see AI as a tool to optimize outcomes. Until institutions explicitly teach why struggle matters, students will rationally choose efficiency over depth.
Action Item (90 min): Hold a faculty-student dialogue (15-20 people from each group). Pose this question: "What would convince you that using AI for this task is not cheating?" Listen for the values underneath. Synthesize the findings into a shared "Learning Integrity Framework" that explains which AI uses support learning and which undermine it. Post it on your institution's learning commons website.
5️⃣ Coursera Report: 95% of Students and Educators Using AI, But Only 25% Feel Prepared
Coursera released its first AI in Higher Education Report (Feb 25), surveying 4,200+ students and educators globally. The headline finding: 95% are now using AI on campus, but only 25% of educators worldwide feel prepared to manage it responsibly. The report also notes that half of U.S. higher education institutions are unprepared to manage AI, even as 78% of U.S. students and educators view AI positively. Adoption is outpacing preparation at a dangerous rate.
Implication for Higher Ed: This is the broadest survey evidence yet that institutions are flying blind. Educators are using AI without confidence; students are using AI without guidance; and institutions lack infrastructure to manage either. This is the moment accreditors will demand accountability.
Action Item (60 min): Download the full Coursera report. Share relevant sections (especially U.S. data) with your provost, faculty senate, and student affairs leaders. Identify which statistics most resonate with your campus. Use them as justification to launch an "AI Readiness Task Force" with a 90-day mandate to assess preparedness and draft an institutional AI governance framework.
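For the provost briefing, the report's headline numbers condense into a simple adoption-versus-preparedness gap. Here is a minimal sketch using only the figures cited above; the metric labels are illustrative, and "half unprepared" is read as roughly 50% prepared.

```python
# Minimal sketch: frame the Coursera stats as adoption-vs-preparedness gaps.
# Percentages are the figures cited above; labels are illustrative, and the
# U.S. readiness figure assumes "half unprepared" implies ~50% prepared.
stats = [
    ("Global: using AI vs. educators feeling prepared", 95, 25),
    ("U.S.: viewing AI positively vs. institutions prepared", 78, 50),
]

for label, adoption, readiness in stats:
    print(f"{label}: {adoption}% vs. {readiness}% (gap: {adoption - readiness} points)")
```

A 70-point global gap is the single statistic most likely to move a board; lead with it.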
Try Something New Today (20 min):

Find one student who uses AI regularly for coursework. Ask: "What's one assignment or skill you're certain AI can't replicate?" Listen closely. Their answer reveals what your institution uniquely offers. Share their insight (anonymously) with your provost this week.
Final Reflection (Feb 25, 2026):

Today's narrative is one of capitulation and chaos. Anthropic, founded to be the "safer" AI company, just abandoned safety when profits and power called. Students are racing ahead with AI while universities lag behind on policy. Entire job categories (software developers) are being declared "obsolete" in real time. Faculty are alarmed; students are optimistic; educators feel unprepared. And the Coursera report confirms what we already suspected: 95% adoption with 25% preparedness is a recipe for institutional failure. This is the moment higher education must choose: lead with intentional AI governance, or follow chaos into irrelevance.

HigherEd AI Daily
askthephd@higheredai.dev
