HigherEd AI Daily: Feb 18 – Pentagon Declares Anthropic “Supply Chain Risk,” Claude Opus 4.6 Launches, WEF: 92M Jobs Displaced by 2030, Harvard Hosts Microsoft CTO on AI Policy

Hello,
Pentagon vs. Anthropic: "Supply Chain Risk" Threat Escalates
Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk"—a dramatic escalation in the Pentagon's feud with the startup. The conflict centers on Anthropic's refusal to allow its Claude model to be used for military surveillance and weapons systems without explicit ethical review.
If the designation goes through, vendors contracting with the Department of Defense would be required to certify that they do not use Anthropic models. That could severely restrict Anthropic's reach into federal and defense-adjacent sectors.
Implication for higher education: If your institution uses Anthropic models and has federal research contracts or partnerships, you may face vendor conflicts. The Pentagon's classification could force institutions to choose between Anthropic and federal funding.
Action item: Review your institution's federal research contracts and current AI vendor agreements. If you're using Claude, document the use case and prepare contingency plans in case of supply chain disruptions.
Claude Opus 4.6: Anthropic's Most Capable Model Yet
Anthropic released Claude Opus 4.6 today, its most powerful model to date. The new version demonstrates major improvements in agentic planning—the ability to break complex tasks into independent subtasks, run tools and sub-agents in parallel, and sustain long-running workflows. It also features enhanced coding capabilities and extended context windows (up to 1M tokens in some configurations).
Early tests show Opus 4.6 handling sophisticated software engineering tasks, including attempts to build a C compiler, albeit with mixed results. The model represents a meaningful step toward AI agents that can autonomously manage complex academic and research workflows.
Implication for higher education: As Claude becomes more agentic, institutions must rethink academic integrity policies. If Claude can autonomously manage research workflows, how do faculty distinguish between student learning and AI-mediated completion?
Action item: Evaluate Opus 4.6 in a pilot group of CS and engineering faculty. Ask: How does agentic capability change your assignment design? Where can agentic AI add learning value? Where does it undermine learning?
WEF Report: 92M Jobs Displaced, 170M Created by 2030 (The Net Gain Narrative)
The World Economic Forum's Future of Jobs Report 2025 predicts that by 2030, 92 million jobs will be displaced globally by AI and automation, but 170 million new roles will be created—yielding a net gain of 78 million jobs. The headline is optimistic: "disruption with net job growth."
But the fine print matters: The jobs being displaced are often entry-level, routine, and geographically concentrated (think administrative, customer service, data entry). The 170M new jobs require different skills, different locations, and different training pathways. The transition is brutal for displaced workers.
Implication for higher education: Your graduates are entering a market where the entry-level jobs they were trained for are disappearing. The "net job growth" narrative masks massive displacement in the sectors where young workers typically begin careers. You must teach transition skills, not just domain expertise.
Action item: Disaggregate the WEF data by sector and skill level. Which 92M jobs are being displaced? Which 170M are being created? Where is your curriculum misaligned? Which skills gaps must you fill?
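The disaggregation in the action item above can be sketched in a few lines of Python once you have the report's tables in a structured form. Everything below is hypothetical: the record layout and every number are illustrative placeholders, not WEF figures, so substitute the actual sector and skill-level tables from the report.

```python
from collections import defaultdict

# Hypothetical records: (sector, skill_level, displaced_millions, created_millions).
# All numbers are illustrative placeholders, NOT figures from the WEF report.
records = [
    ("administrative",   "entry", 12.0, 1.0),
    ("customer_service", "entry",  9.0, 2.0),
    ("data_entry",       "entry",  7.5, 0.5),
    ("software",         "mid",    1.0, 6.0),
    ("health_care",      "mid",    0.5, 4.0),
]

def net_change_by(rows, key_index):
    """Sum net job change (created minus displaced) grouped by the given column."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key_index]] += row[3] - row[2]
    return dict(totals)

by_skill = net_change_by(records, key_index=1)   # group by skill level
by_sector = net_change_by(records, key_index=0)  # group by sector
# Even when the overall total is positive, entry-level roles can show a steep
# net loss, which is exactly the misalignment the action item asks you to find.
```

Run against the real tables, the `by_skill` breakdown is what surfaces the gap the "net job growth" headline hides.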
Harvard Hosts Microsoft CTO Kevin Scott on AI Policy (Today, 4pm)
Today at 4pm EST, Harvard's Institute of Politics will host Kevin Scott, Chief Technology Officer at Microsoft, for a discussion titled "Governing the Future: AI Policy and the Road Ahead." Scott will explore the roles of government, industry, and civil society in shaping responsible AI policy—and what's at stake for communities on the wrong side of the digital divide.
This conversation reflects the urgent need for AI governance frameworks. Scott, who has testified before Congress and advised multiple administrations, speaks for one of the world's largest AI investors and practitioners.
Implication for higher education: Policy conversations are accelerating. Your institution should be monitoring (and perhaps participating in) AI governance frameworks at federal, state, and local levels. Silence is not neutral.
Action item: Designate an AI policy liaison at your institution. Their job: track legislative and regulatory developments that affect your AI use and partnerships. Brief your provost monthly. Share findings with faculty governance committees.
Marist University Alumni Panel: AI in the Workplace (Today, 11am)
Today at 11am, Marist University is hosting an alumni panel titled "AI in the Workplace," in which graduates will share first-hand accounts of how AI is reshaping their careers and offer advice to current students on preparing for an AI-integrated workforce.
This type of student-facing event is smart career preparation. Students hear directly from alumni about the reality of AI adoption in real companies—not from career services slide decks.
Implication for higher education: Alumni panels are an underutilized channel for bridging the gap between academic training and workplace reality. They also strengthen alumni engagement and help identify curriculum misalignments quickly.
Action item: Schedule an alumni panel at your institution focused on "AI in My Industry." Invite 5-7 graduates across different sectors. Record it. Share findings with faculty senate and curriculum committees.
Try Something New Today
Create a simple "AI Vendor Risk Matrix" for your institution. List every AI tool your campus uses (ChatGPT, Claude, GitHub Copilot, etc.). For each, assess four dimensions: geopolitical exposure, supply chain fragility, data governance risk, and cost stability. Rate each dimension from 1 (low risk) to 5 (high risk). Share the result with your provost. This 30-minute exercise will clarify your institutional dependencies and vulnerabilities.
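If your team prefers a spreadsheet-free starting point, the matrix exercise can be sketched in a few lines of Python. This is a minimal sketch assuming the four dimensions and the 1-to-5 scale described above; the vendor scores below are made up for illustration and are not assessments of those products.

```python
from dataclasses import dataclass

# The four risk dimensions from the exercise above.
DIMENSIONS = ("geopolitical_exposure", "supply_chain_fragility",
              "data_governance_risk", "cost_stability")

@dataclass
class VendorRisk:
    """One row of the matrix; each score runs 1 (low risk) to 5 (high risk)."""
    tool: str
    scores: dict

    def composite(self) -> float:
        # Unweighted average; weight dimensions differently if your
        # institution's exposure warrants it.
        return sum(self.scores.values()) / len(self.scores)

def build_matrix(rows):
    """Validate scores and return rows sorted highest composite risk first."""
    for r in rows:
        assert set(r.scores) == set(DIMENSIONS), f"wrong dimensions: {r.tool}"
        assert all(1 <= v <= 5 for v in r.scores.values()), f"score out of range: {r.tool}"
    return sorted(rows, key=VendorRisk.composite, reverse=True)

# Made-up scores for illustration only, not actual vendor assessments.
matrix = build_matrix([
    VendorRisk("Claude",         dict(zip(DIMENSIONS, (4, 4, 2, 3)))),
    VendorRisk("ChatGPT",        dict(zip(DIMENSIONS, (2, 2, 3, 3)))),
    VendorRisk("GitHub Copilot", dict(zip(DIMENSIONS, (2, 2, 2, 2)))),
])
for row in matrix:
    print(f"{row.tool}: {row.composite():.2f}")
```

Sorting highest-risk first means the provost briefing leads with your biggest dependencies.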

A Final Reflection for Today: February 18, 2026 marks a critical juncture. The Pentagon threatens Anthropic over ethics. Claude becomes more agentic. The WEF releases net-positive job numbers that mask brutal displacement. Microsoft's CTO speaks about governance at Harvard. And Marist alumni tell students the truth about AI at work. The narrative is clear: policy decisions, vendor conflicts, technical capabilities, and labor market disruptions are converging in real time. Institutions that wait for clarity will be left behind. Leadership means acting now, with imperfect information, on behalf of students and society.

HigherEd AI Daily
