Hello,
1️⃣ Nature: "How AI Slop Is Causing a Crisis in Computer Science"
Nature published a major investigation (Feb 13) documenting how AI-generated low-quality submissions ("AI slop") are flooding preprint repositories and academic conferences, creating a crisis in peer review. Conference organizers report unprecedented volumes of gibberish papers, duplicative research, and fabricated citations—all generated by AI and submitted by researchers seeking publication speed over rigor.
Implication for Higher Ed: Universities are training students who will inherit research ecosystems polluted by AI garbage. Faculty teaching research methods must prepare students to detect and reject AI slop. Institutional review boards (IRBs) face new challenges certifying research quality when AI authorship is opaque.
Action Item (90 min): Convene your research integrity office and graduate program directors. Map current safeguards against AI slop submissions (plagiarism detection, originality verification, citation audits). Draft an "AI-Generated Content in Research" policy clarifying what's allowed vs. prohibited. Distribute to all graduate students and postdocs conducting research.
2️⃣ Jamie Dimon: JPMorgan Plans "Huge Redeployment" as AI Reshapes Workforce
JPMorgan Chase CEO Jamie Dimon announced (Feb 24) that the bank already has "huge redeployment plans" for workers whose roles are disrupted by AI, but noted the plans need to scale up significantly. Dimon signaled that society—not just JPMorgan—must prepare for widespread AI job displacement, calling for broader policy solutions to manage the transition.
Implication for Higher Ed: If JPMorgan (with massive capital and HR resources) is struggling to redeploy workers, universities have even fewer levers. Staff in administrative roles (data entry, scheduling, customer service) will face displacement with little support. The "redeployment" model assumes new jobs exist—they often don't.
Action Item (120 min): Model what JPMorgan redeployment might look like at your institution. For each administrative role facing AI automation, identify 2-3 "destination roles" where displaced workers could move (e.g., AI governance, ethics review, student support). Create a 24-month transition roadmap. Share with HR and affected departments to build psychological safety and confidence.
3️⃣ Anthropic Releases AI Fluency Index: Measuring How Students Learn AI
Anthropic published its AI Fluency Index (Feb 26), analyzing 11 observable behaviors across thousands of Claude.ai conversations to understand how people develop AI competency. The index measures progression from novice (basic prompting) to expert (sophisticated multi-turn reasoning, error detection, refinement loops). Anthropic positions this as a framework for educators to assess student AI literacy.
Implication for Higher Ed: A vendor (Anthropic) is defining what "AI fluency" means for educational institutions. This shifts power away from faculty-designed learning outcomes toward vendor-defined metrics. Universities adopting this index are implicitly endorsing Claude as the standard AI tool and Anthropic's fluency model as institutional truth.
Action Item (90 min): Review Anthropic's AI Fluency Index and consider how it might apply at your institution. Then convene your faculty to design your own "University AI Literacy Framework" reflecting your values and learning objectives (not Anthropic's). Define what AI competency means for your graduates. Share this framework with students, faculty, and prospective employers to establish institutional authority over AI literacy standards.
4️⃣ Nature: "AI Could Transform Research Assessment" — Peer Review Gets an AI Upgrade
Nature published research (Feb 26) showing that AI can improve research assessment efficiency and cost-effectiveness. A separate study found AI-powered coaching tools can help peer reviewers provide more constructive, less toxic feedback. The findings suggest AI could streamline peer review while improving quality—but academics worry about bias, conflicts of interest, and loss of human judgment in editorial decisions.
Implication for Higher Ed: Universities rely on peer review for tenure decisions, grant decisions, and research credibility. If AI starts making peer review recommendations (or handling editorial decisions), those algorithms will embed institutional biases and vendor agendas. Faculty must remain the decision-makers, with AI as a tool, not a replacement.
Action Item (60 min): Brief your research administrators and provost on AI-powered peer review tools entering the market. Establish a policy: AI may assist reviewers (summarization, tone checking, duplicate detection) but final editorial decisions remain with human faculty. Require transparency about which AI tools are used in research evaluation. Audit current practices for hidden AI decision-making in promotion/tenure cases.
5️⃣ UK × Microsoft "Cats AI in Action" Showcase (Feb 26): The Campus Partnership Model on Display
The University of Kentucky is hosting a campuswide AI showcase (today, Feb 26, 10 AM–2 PM) co-branded with Microsoft, featuring demonstrations of AI applications in healthcare, education, and business. The event epitomizes the vendor-university partnership model: free showcase, hands-on learning, "responsible AI" narrative—and deep market penetration.
Implication for Higher Ed: Vendor showcases are the primary mechanism through which campuses introduce AI to faculty and students. These events are marketing events, not neutral educational spaces. Universities hosting them are implicitly endorsing the vendor's AI approach and narrative. Students and faculty absorb vendor messaging as institutional truth.
Action Item (60 min): If your institution plans a similar vendor showcase, require critical voices at the table. Invite faculty skeptics, ethics researchers, and affected staff to co-design the agenda. Ensure equal floor time for vendor enthusiasm AND critical perspectives. Post-event, survey participants: "Did this event feel educational or promotional?" Share results with your board to demonstrate institutional stewardship of vendor influence.
Try Something New Today (20 min):
Ask one graduate student or postdoc: "Have you ever submitted or encountered AI-generated research?" Listen to their story. Then share it (anonymously) with your research integrity officer and one faculty mentor. This frontline intelligence reveals whether AI slop is already polluting your institution's research ecosystem.
Final Reflection (Feb 26, 2026):
Today's headlines reveal the collision of two competing narratives: AI as liberation vs. AI as crisis. Nature documents AI slop destroying research integrity. Dimon admits JPMorgan can't redeploy workers fast enough. Anthropic sells "AI fluency" frameworks. Peer review gets an AI makeover. And universities host vendor showcases as celebrations. Higher education is caught in the middle: adopting AI to stay competitive, but losing institutional control over what AI means, how it's used, and who benefits. The question is no longer "Should we use AI?" but "Who gets to decide how we use it?" If universities don't answer that question—with faculty, students, and community at the table—vendors and employers will answer it for them. And the answers won't prioritize human flourishing.