HigherEd AI Daily: Feb 27 – Pentagon Gives Anthropic Until 5pm Today: Comply or Lose DoD Contracts, 331 Google and OpenAI Employees Sign "We Will Not Be Divided" Petition, Georgetown Partners with Google Gemini, Inside Higher Ed: "Robot Overlords Are Red Herrings," Walmart CPO Warns U.S. Falling Behind China on AI Education

Friday, February 27, 2026
HigherEd AI Daily
Published by AskThePhD | askthephd@higheredai.dev
Hello,
Today is a day that will be remembered. By 5 p.m. ET, Anthropic must decide whether to surrender its AI safety principles to the Pentagon or risk being classified as a national security threat. Meanwhile, 331 Google and OpenAI employees have signed a solidarity petition, Georgetown just became one of the first major Jesuit universities to deploy Google Gemini campus-wide, and a community college dean published the most important higher-ed AI essay of the month. This is what this newsletter was built for.
Vendor Ethics / National Security
Pentagon Gives Anthropic Until 5 p.m. Today: Comply or Lose DoD Contracts
Anthropic has until 5 p.m. ET today to agree to allow the Department of Defense to use its AI models for all lawful purposes without restriction. Defense Secretary Pete Hegseth has threatened to label the company a "supply chain risk" — a designation typically reserved for adversarial nations — if it refuses. The company signed a $200 million DoD contract in July 2025 and has refused to remove two safeguards: no use of Claude for fully autonomous weapons, and no use for domestic mass surveillance of Americans.
Anthropic CEO Dario Amodei has held firm: "These threats do not change our position: we cannot in good conscience accede to their request." The Pentagon's response: Hegseth's deputy called Amodei a "liar with a God-complex." The standoff is now the most consequential test of vendor AI ethics in the industry's history.
For campuses with DoD-funded research or Anthropic-powered tools in their tech stacks, the outcome of today's deadline will matter immediately. If Anthropic is blacklisted, every university using Claude may need emergency substitutions.
Action (today): IT leadership should identify every campus system where Anthropic's Claude is embedded. Have a contingency plan ready. Brief the provost's office before end of day regardless of outcome; this deadline sets a precedent for every AI vendor relationship your institution holds.
Industry / Ethics
331 Google and OpenAI Employees Sign "We Will Not Be Divided" Petition
By Friday morning, 266 Google and 65 OpenAI employees had publicly signed an open letter titled "We Will Not Be Divided," expressing solidarity with Anthropic and calling on their own company leaders to refuse the Pentagon's demands. The letter states that the DoD is deliberately trying to "divide each company with fear that the other will give in" and urges leaders to hold shared red lines against domestic mass surveillance and autonomous weapons.
A separate internal letter from more than 100 Google DeepMind employees to chief scientist Jeff Dean made the same case. This is the first coordinated cross-company employee action against government AI policy in the industry's history.
The significance for higher education is direct: these employees are building the AI tools your students and faculty use every day. Their values — and whether their companies honor those values — determine what your campus is endorsing when it signs an enterprise AI contract.
Action (one week): Add an "AI Ethics Alignment" clause to all new and renewed vendor contracts. Require vendors to disclose if they have accepted any government demands to override stated safety policies. This is not hypothetical; it is today's news.
Campus AI / Student Affairs
Georgetown Partners with Google Gemini: "Access Is the First Step"
Georgetown University announced this week that it will deploy Google Gemini for Education to all students, faculty, and staff beginning in March, becoming one of the first major Jesuit universities to make a campus-wide AI partnership. The rollout will also include a new cross-major concentration in Ethics and AI, an AI core requirement in the MBA program, and a MedStar simulation center.
Not everyone agrees. Philosophy professor William Fleisher called the rollout "hasty," questioning whether widespread institutional endorsement is the same as thoughtful integration. English professor Nathan Hensley was more direct: "The university does not have any kind of AI policy." Student Valli Pendyala put it simply: "Efficiency is being prioritized over actually engaging with the material."
Georgetown's rollout captures a tension playing out at every institution: the pressure to deploy AI at scale before the ethical and pedagogical frameworks are in place to support it.
Action (30 days): If your institution is considering or has already made a campus-wide AI deployment, conduct a faculty survey within the next 30 days. Ask: Do you feel prepared to teach with this tool? Do you feel the institution has given you adequate guidance? Share results with the provost and academic senate before semester end.
Higher Ed Leadership / Economics
"Robot Overlords Are Red Herrings": A Community College Dean Answers the Viral AI Crash Memo
A blog post published this week predicted that agentic AI will trigger economic collapse within years — crashing property values, gutting the public sector, and concentrating wealth catastrophically. The post went viral and tanked the stock market. Inside Higher Ed's community college dean responded today with an essay that cuts through the panic to the real issue.
The argument: the robots are not the problem. The political choice about who captures the gains from productivity is the problem. The dean invokes Galbraith, Keynes, and Schor to argue that we solved scarcity decades ago and chose not to share the gains broadly. "Should the fruits of technological advance be hoarded exclusively at the top, or should they be shared broadly? Get that right and the bots may serve a useful purpose. Get that wrong and the viral memo may prove prophetic."
This is the most important framing for higher-ed leaders this week. Universities that teach students only to compete in an AI economy are missing the point. Universities that teach students to ask who benefits — and to shape those choices — are doing their job.
Action (one semester): Assign this essay in your next faculty development session. Then ask: Does your curriculum give students the tools to analyze who captures the gains from AI — not just how to use AI? If not, identify one course in each major where that question can be introduced by Fall 2026.
Workforce / Global Competitiveness
Walmart's Chief People Officer: "Five-Year-Olds in China Are Learning DeepSeek"
Donna Morris, Walmart's Chief People Officer, gave Fortune a stark warning today: China teaches AI concepts beginning in primary school, Beijing's schools require a minimum of eight hours of AI instruction per year, and nearly one-third of the world's top AI talent was born in China. Meanwhile, U.S. employers report that most entry-level candidates are not AI-fluent, and companies like Walmart, Deloitte, and Verizon are now funding large-scale internal AI training.
"AI literacy is the fastest-growing skill," according to LinkedIn data, and two-thirds of business leaders say they would not hire someone without AI skills. The question Morris is raising is not about individual companies; it is about national competitiveness. The U.S. is outsourcing AI workforce development to employers — a stopgap, not a system.
This is an open door for higher education. Universities that build AI fluency into general education requirements — not as an elective, not as a CS course, but as a core outcome across all majors — will produce exactly what employers say they cannot find.
Action (one academic year): Propose an "AI Fluency" general education requirement to your curriculum committee this semester. Frame it not as a technology course but as a critical thinking requirement for the AI age. Model it on what employers say they need.
Try Something New Today (15 min)
Read the Inside Higher Ed essay "Robot Overlords Are Red Herrings" in full. Then send a one-paragraph reaction to one colleague — a faculty member, a dean, or a department chair — and ask: does your curriculum give students the analytical tools to decide who benefits from AI? Do not send a link. Write the paragraph yourself. That is the point.
Today's newsletter carries one consistent message across five different stories: the rules of the AI era are being written right now, and the people writing them are not waiting for higher education to catch up. Anthropic is drawing a line in the sand against the Pentagon. Google and OpenAI employees are drawing their own. Georgetown is signing its first AI contract. A community college dean is the clearest voice in the room. Walmart is training workers because universities are not. In each of these stories, someone is making a choice. The question for higher-ed leaders is not whether to engage. It is whether to engage before the choices are made for them.
HigherEd AI Daily is published by AskThePhD / higheredai.dev
You are receiving this because you subscribed to HigherEd AI Daily.