Hello,
Sam Altman: "Even My CEO Job Isn't Safe from AI"
Speaking at India's AI Impact Summit, OpenAI CEO Sam Altman made a striking admission: he believes AI will soon be able to perform his job better than he can. "Certainly me," he said when asked if AI would replace even senior executive roles.
Altman also confirmed what Fortune reported last week: many companies are "AI washing" their layoffs—blaming AI for cost-cutting decisions they would have made anyway. He acknowledged this is real, though he couldn't estimate what percentage of 2025's 1.2M job cuts fell into this category.
Implication for higher education: If the CEO role isn't safe, nothing is. Your graduates need to internalize that continuous reskilling is now a lifetime necessity, not a career-transition event. This fundamentally changes how you counsel students about career stability.
Action item: Brief your career services team. The "find a stable job and keep it for 30 years" narrative is obsolete. Teach students how to build adaptability, learning agility, and professional networks that survive industry disruption.
The Hand-Holding Snub: Altman & Amodei's Rivalry Goes Theatrical
Prime Minister Narendra Modi orchestrated a symbolic photo op at India's AI Impact Summit, asking 13 business leaders to join him onstage holding hands as a gesture of unified commitment to responsible AI. Everyone complied—except Sam Altman and Dario Amodei, who stood next to each other but kept their hands conspicuously apart.
The moment went viral. It was a live, public display of the intensifying OpenAI-Anthropic feud—fueled by Pentagon threats against Anthropic, competition toward 2026 IPOs, and fundamental disagreements over AI safety vs. speed. The snub signals how personal and institutional this rivalry has become.
Implication for higher education: When AI vendors are in public conflict, institutional customers face increased risk. If your campus depends on both OpenAI and Anthropic, you may be forced to choose sides—or face supply chain disruption.
Action item: Review your AI vendor portfolio. If you're using both OpenAI and Anthropic heavily, identify which use cases could be migrated quickly. Vendor conflict is now a material business risk.
Inside Higher Ed: "AI Will Break Assessment Before It Fixes It"
An Inside Higher Ed opinion piece argues that generative AI has broken the "artifact economy"—the assumption that essays, problem sets, exams, and papers reliably demonstrate learning. Students can now generate credible work without learning the material.
The author contends that institutions must now fundamentally rethink how they assess learning. Old methods (essays, take-home exams, portfolios) no longer work. But institutions haven't yet designed what comes next. The result: assessment is breaking faster than fixes are emerging.
Implication for higher education: Your accreditors, employers, and peers are noticing that institutional credentials have become unreliable, and that erodes trust in your degrees. The crisis is not individual cheating—it's a systemic loss of confidence in what a degree represents.
Action item: Convene your assessment committee, faculty senate, and academic deans. Discuss: What does mastery look like in an AI-enabled world? How do we assess learning that AI can't generate? Draft a 12-month roadmap for assessment redesign.
Harvard Student Op-Ed: "I Won't Let AI Dominate My Education"
A Harvard student published an op-ed in The Boston Globe arguing that some courses—precisely because they're "useless" in a vocational sense—are the most valuable ones. Liberal arts, philosophy, humanities courses teach thinking and judgment, not just job skills. AI threatens to hollow out this value by making everything seem immediately applicable.
The student asserts that he and his peers are pushed toward "AI-adjacent" courses and majors because employers prioritize AI literacy. But what gets lost is the cultivation of wisdom, ethical reasoning, and the ability to ask important questions—capacities that can't be automated.
Implication for higher education: Students are beginning to articulate the difference between training and education. They sense that "usefulness" is being conflated with "value." Your institution's competitive advantage may lie in protecting humanistic learning, not in racing to be "AI-ready."
Action item: Create a communications campaign about the enduring value of liberal arts and humanities in an AI era. This message resonates with parents, students, and employers tired of the hype cycle. Position your institution as a defender of real education, not just workforce prep.
Nebraska AI Institute: Systemic Approach to AI Integration
The University of Nebraska launched a systemwide AI Institute spanning all four campuses, coordinating research, teaching, workforce development, and public engagement. The institute focuses on ethical AI innovation with particular attention to agriculture, rural health, and economic development—sectors critical to Nebraska.
This is an example of institutional strategy done right: rather than scattered faculty initiatives or a top-down mandate, Nebraska created a coordinated ecosystem that connects research, curriculum, community engagement, and workforce development. It signals serious commitment to both AI advancement and responsible deployment.
Implication for higher education: One-off programs and course additions won't work. Institutions need systemic approaches—coordinated across colleges, connected to research and workforce development, and rooted in regional economic needs.
Action item: If you haven't yet, establish a university-wide AI coordination body. Meet monthly. Ensure it includes research, curriculum, student services, career services, IT, and governance. This prevents silos and ensures alignment across campus.
Try Something New Today
Ask three faculty members this question: "If AI can generate work in your discipline, what's left to teach?" Listen for their answers. Do they describe reskilling students for a different role, or reimagining what learning means? The range of answers will show you where institutional conversation is most needed.
A Final Reflection for Today, February 19, 2026: Sam Altman admits his own job isn't safe. He and Dario Amodei refuse to hold hands on a global stage. Assessment is breaking. A Harvard student defends the "useless" humanities. Nebraska builds a coordinated AI institute. The narrative converges: AI is both opportunity and threat. The institutions that will thrive are those that refuse false choices—they won't sacrifice liberal learning for job prep, won't chase every technology trend, won't pretend assessment still works the old way. They'll instead build institutional clarity about what education is for. That clarity is your competitive advantage now.
HigherEd AI Daily