HigherEd AI Daily: March 12 — McKinsey Got Hacked, Anthropic Studies Society, and the Public Is Pushing Back on AI

McKinsey's Internal AI Was Breached in Under Two Hours
Security startup CodeWall used an AI agent to break into McKinsey's internal AI platform, known as Lilli, in less than two hours. The breach exposed 46.5 million messages and 728,000 client files through an unauthenticated API vulnerability. McKinsey confirmed the incident and said the flaw has since been patched.
This is not a story about one company's mistake. It is a preview of how quickly AI-assisted attacks can move against institutional AI platforms. The internal systems McKinsey runs are increasingly the same kind of platform universities are building for research, advising, and administrative operations.
Why it matters for campuses. Any institution deploying an internal AI platform that handles student records, research data, or personnel files needs to treat security testing as a regular practice, not a one-time setup step. This incident is a clear case study for IT governance conversations and for courses in cybersecurity, information systems, and data ethics.
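For teams that want to make that practice concrete, a recurring check for unauthenticated endpoints is a reasonable starting point. The sketch below is illustrative only: the endpoint URLs are hypothetical placeholders, and the specific flaw in Lilli has not been published in detail, so this models the general class of problem rather than the actual exploit.

```python
# Illustrative sketch: flag internal API routes that answer anonymous requests.
# The URLs below are hypothetical placeholders, not real endpoints.
import requests

ENDPOINTS = [
    "https://ai-platform.example.edu/api/v1/messages",
    "https://ai-platform.example.edu/api/v1/files",
]

def requires_auth(url: str) -> bool:
    """Return True if the endpoint rejects a request sent with no credentials."""
    resp = requests.get(url, timeout=10)  # deliberately no auth header or token
    # A 401/403 is the expected answer to an anonymous request; anything else,
    # especially a 200 with data, is worth escalating to the security team.
    return resp.status_code in (401, 403)

for url in ENDPOINTS:
    label = "ok" if requires_auth(url) else "EXPOSED?"
    print(f"{label:9} {url}")
```

Run on a schedule, a check like this catches the most basic failure mode, an endpoint that serves data with no credentials at all, which is the class of flaw reported in this incident.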
The Anthropic Institute Opens to Study AI and Society
Anthropic announced the launch of the Anthropic Institute, a new initiative led by Jack Clark to study the legal, economic, and societal consequences of AI development. The institute will conduct and publish original research on questions that go beyond model benchmarks, including how AI changes labor markets, how institutions should respond to rapid capability gains, and what governance frameworks can keep pace with the technology.
Jack Clark is a co-founder of Anthropic and one of the most recognized voices in AI policy. The institute's formation signals that the leading AI labs are now investing in the kinds of questions that have traditionally lived inside universities and think tanks.
Why it matters for campuses. Faculty in sociology, political science, economics, law, and education have a natural partner in this kind of research. The Anthropic Institute's publications will be a credible citation source for syllabi, grant applications, and institutional AI policy work. Follow its output at anthropic.com.
Why So Many People Distrust AI Right Now
AI usage has reached 45 billion monthly sessions worldwide and accounts for more than half of all global search traffic. ChatGPT alone holds 89 percent of global AI sessions. And yet public hostility toward AI tools is rising, not falling. Analysis from the TLDR Marketing newsletter identifies three drivers: deep distrust of technology companies rooted in years of data misuse, economic anxiety about job loss in knowledge work, and cultural resistance from creative professionals who feel their skills are being automated without credit or consent.
Scale and resistance are growing together. That tension is not a contradiction: it reflects the fact that AI is useful and threatening at once, and most people do not have a framework for holding both truths.
Why it matters for campuses. Educators who have tried to introduce AI tools in class and met resistance from students are not facing a technology problem. They are facing a trust problem. Understanding the roots of that distrust makes it possible to address it directly rather than dismissing skeptical students as uninformed. This is a conversation worth having openly.
Replit Agent 4 Lets Developers Build Front and Back End at the Same Time
Replit launched Agent 4, introducing an infinite design canvas and the ability to run parallel AI agents that build a frontend and a backend simultaneously. Instead of building one component at a time, developers can now describe an entire product and have multiple agents work on different parts in parallel, sharing context as they go. Meanwhile, Cursor is reportedly in talks at a $50 billion valuation, a signal of how large the AI coding market has become.
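Replit has not published Agent 4's internals, so the sketch below is only a conceptual illustration of the fan-out pattern the announcement describes: two agent tasks run concurrently and publish what they build into a shared context. All names here (run_agent, build_product) are invented for illustration and are not Replit's API.

```python
# Conceptual sketch only, not Replit's API: parallel agents sharing context.
import asyncio

async def run_agent(name: str, spec: str, shared: dict) -> str:
    # Stand-in for a real agent call (e.g., an LLM request building one layer).
    await asyncio.sleep(0.1)  # simulate work
    shared[name] = f"{name} built from: {spec}"  # publish context for peers
    return shared[name]

async def build_product(spec: str) -> dict:
    shared: dict = {}  # the context agents exchange as they work
    # Fan out: the frontend and backend agents run at the same time.
    await asyncio.gather(
        run_agent("frontend", spec, shared),
        run_agent("backend", spec, shared),
    )
    return shared

print(asyncio.run(build_product("a course-feedback web app")))
```

The interesting design question is the shared context: parallel agents only help if each can see enough of what the others have produced to keep the frontend and backend consistent.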
Tools like Replit are increasingly accessible to students with no formal coding background. The barrier to building a functional web application has dropped to the level of a clear description and a free account.
Why it matters for campuses. Computer science departments are not the only programs affected by this shift. Business, communications, education, and social work programs can now assign projects that require students to build functional tools without requiring them to learn syntax. The definition of digital literacy on campus is expanding faster than most curricula can track.
Tool of the Day: Perplexity Personal Computer
Perplexity Personal Computer is an always-on AI agent that runs on a Mac mini and manages your local files, applications, and work sessions on your behalf. You describe what you need done and the agent navigates your computer to complete it, including opening documents, organizing files, and moving information between apps.
For faculty managing large volumes of student work, research files, or course materials across multiple folders, an agent that operates your computer rather than just answering questions represents a meaningful shift in what AI assistance can mean in daily academic work.
Status: Available to Perplexity Pro subscribers. Source: The Rundown AI, March 12, 2026.
The people who distrust AI are not wrong to feel what they feel. They are responding to real patterns in how technology has been deployed against them before. Our job as educators is not to sell AI. It is to help people think clearly about it. That starts with taking their skepticism seriously.
Warmly,
Dr. Ali Green
Founder, Ask The PhD Community
askthephd.com
Not sure where you stand with AI in your teaching practice?
Take the 90-Second AI Readiness Assessment


Sources
The Rundown AI Newsletter, March 12, 2026
TLDR AI Newsletter, March 12, 2026
TLDR Marketing Newsletter, March 12, 2026
TLDR IT Newsletter, March 12, 2026
Ask The PhD Community. Empowering 1 Million Educators, One AI Tool at a Time.
