HigherEd AI Daily: Jan 13 – 94% of Higher Ed Workers Use AI, But Only 54% Know Their Institution’s Policies

Daily AI Briefing for Educators
Tuesday, January 13, 2026
Good morning, educators. Today's briefing reveals a critical governance gap in higher education: 94 percent of workers use AI tools daily or weekly, but only 54 percent know their institution's policies governing that use. This disconnect signals a fundamental leadership failure—not because policies don't exist, but because they aren't reaching the people who need them most. Meanwhile, teacher adoption of AI has nearly doubled in two years, driven by pragmatic needs and embedded tools. The question is no longer whether higher education will use AI. The question is whether institutions will lead this adoption with clarity or inherit chaos.
Today's Focus: The Governance-Adoption Disconnect
The Core Finding: 94% Use AI, 54% Know the Policies
New research from EDUCAUSE surveyed 1,960 staff, administrators, and faculty across more than 1,800 institutions. The findings are stark: while 94 percent of respondents use AI tools for work, only 54 percent are aware of their institution's AI use policies and guidelines. This is not a minor communication gap. This is a governance crisis.
The implications are profound. When workers don't know institutional policies, they make independent decisions about which tools to use, how to deploy them, and what data to share. Fifty-six percent of respondents reported using AI tools that are not provided by their institutions. The awareness gap reaches the very top: 38 percent of executive leaders are unaware of their own institution's AI policies.
What This Signals
Institutions have adopted AI across their workforce without establishing governance infrastructure. This creates three immediate risks: data privacy breaches, compliance violations, and shadow systems that cannot be audited or controlled. Leadership has failed to communicate—or in many cases, to establish—the policies that protect institutional assets.
The Enthusiasm-Caution Paradox
Eighty-one percent of respondents expressed at least some enthusiasm about AI. But here's the nuance: only 33 percent were "very enthusiastic/enthusiastic," while 48 percent reported a mix of "caution and enthusiasm." Seventeen percent said they were "very cautious/cautious" about it.
This distribution is telling. The majority of higher education workers are not reckless adopters. They see potential, but they're hedging their bets. They want to use AI, but they're uncertain about the implications. This is the sentiment of people operating without clear guidance.
When asked about institutional leaders' attitudes toward AI, respondents painted a similar picture: 38 percent perceived enthusiasm, 36 percent perceived a mix of caution and enthusiasm, and 15 percent perceived caution. Ambivalence dominates on both sides of the org chart, and neither workers nor their leaders project confidence.
The Risks Are Real and They're Overwhelming
Sixty-seven percent of respondents identified six or more "urgent" AI-related risks. The most commonly cited concerns include:
  • Increase in misinformation and hallucinations
  • Use of data without consent or proper governance
  • Loss of fundamental skills requiring independent thought
  • Student AI use outpacing faculty and staff AI capabilities
  • Job loss and workforce disruption
  • Data privacy and security breaches
The finding that 67 percent identify six or more urgent risks is significant. Workers are not dismissing AI. They're overwhelmed by the scope of the risks and the lack of institutional guidance for managing them. This is how institutions end up with cultures of fear instead of cultures of adoption.
Teachers Are Leading the Charge: 61% Now Use AI
While governance lags at the institutional level, teachers are voting with their actions. Teacher AI use nearly doubled from 34 percent in 2023 to 61 percent in 2025, according to EdWeek Research Center data. This is driven by two factors: professional development and embedded tools.
In 2025, 50 percent of teachers reported having at least one professional development session on using AI in their work—nearly double the 29 percent from early 2024. Simultaneously, major edtech companies—Canva, Google, Kahoot!, Khan Academy, Microsoft—have embedded generative AI directly into their platforms. Teachers don't have to seek out AI; it's already in the tools they use daily.
Teachers are also pragmatic. They see AI as a solution to overwhelming workload demands. When you're managing multiple competing demands on your time, the ability to use AI for planning, differentiation, and feedback becomes incredibly valuable. Teachers aren't waiting for perfect policy. They're adopting tools that solve immediate problems.
Four Stages of AI Integration: Where Is Your Institution?
The U.S. Department of Education has outlined a framework for understanding where institutions stand in their AI journey:
Stage 1: Fear – Educators worry machines will replace them, students fear algorithmic bias, administrators fear ethical scandals. Left unaddressed, fear calcifies into avoidance. The solution is active engagement, training, and clarifying that AI will augment, not replace, human educators.
Stage 2: Skill Erosion – When AI is used as a shortcut instead of a tool. Students generate essays without engaging deeply. Faculty lose touch with student learning patterns. The risk is subtle but real: AI hollows out the very skills education is meant to strengthen—critical thinking, reasoning, creativity.
Stage 3: Acceptance (and Stagnation) – AI becomes normalized, like spell-check or calculators. Systems are more efficient, but not transformed. Courses still look the same. Outcomes are measured the same way. Many institutions get stuck here, accepting AI for efficiency gains but missing its transformative potential.
Stage 4: Reinvention – Education is reimagined, not just improved. AI becomes a creative partner. Dynamic scheduling adjusts offerings in real time. Personalized learning adapts pace and content to each student's needs. Career-aligned pathways use predictive analytics to guide student success. Faculty roles evolve toward mentorship and design.
Most of higher education is hovering between fear and acceptance. The real opportunity lies in reinvention—if institutions have the courage to push past stagnation.
The ROI Problem: Only 13% of Institutions Measure It
Institutions are adopting AI widely but measuring its impact barely at all. Just 13 percent of respondents said their institution is measuring return on investment for work-related AI tools. This is a damning statistic. How can leadership justify continued investment in AI if they don't know whether it's working?
Measuring ROI for AI is challenging, and that's likely why institutions skip it. But that avoidance creates a strategic vulnerability. Without ROI data, institutions cannot defend AI investments when budgets tighten or when skeptics demand evidence.
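To make this concrete, here is a minimal sketch of what a first-pass ROI estimate for a single AI tool could look like. It is written in Python with entirely hypothetical inputs (the hours saved, user counts, and costs are illustrative, not drawn from the EDUCAUSE data), and it simply compares the value of staff time saved against licensing and training costs. A real analysis would also need to weigh adoption rates, output quality, and indirect costs.

```python
# First-pass ROI estimate for a single AI tool.
# All figures below are hypothetical and purely illustrative.

def simple_ai_roi(hours_saved_per_user_per_week: float,
                  active_users: int,
                  loaded_hourly_cost: float,
                  annual_license_cost: float,
                  one_time_training_cost: float,
                  work_weeks_per_year: int = 44) -> float:
    """Return first-year ROI as a fraction (e.g., 0.25 means 25%)."""
    annual_benefit = (hours_saved_per_user_per_week * active_users
                      * work_weeks_per_year * loaded_hourly_cost)
    total_cost = annual_license_cost + one_time_training_cost
    return (annual_benefit - total_cost) / total_cost

# Illustrative scenario: 200 staff each save one hour per week.
roi = simple_ai_roi(hours_saved_per_user_per_week=1.0,
                    active_users=200,
                    loaded_hourly_cost=45.0,      # salary + benefits per hour
                    annual_license_cost=60_000,
                    one_time_training_cost=20_000)
print(f"First-year ROI: {roi:.0%}")  # roughly 395% with these inputs
```

Even a rough estimate like this, repeated quarterly for a handful of flagship projects, gives leadership something concrete to defend when budgets tighten.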
What This Means for Your Institution in 2026
1. Establish AI governance NOW, not after a crisis. Don't wait for a data breach or compliance violation to clarify your AI policies. Communicate policies to every employee. Make it clear which tools are approved, which data can be shared, and what escalation pathways exist for edge cases.
2. Invest in professional development before scaling AI. The data shows that teachers with PD are more confident and effective adopters. Make AI literacy a baseline competency for all staff. Don't assume people will figure it out on their own.
3. Move from acceptance to reinvention. Using AI to save time is good. Using AI to fundamentally reimagine how your institution operates is transformational. Ask not "How can AI make our current systems more efficient?" Ask "What becomes possible with AI that wasn't before?"
4. Measure ROI on key AI initiatives. Pick 2-3 high-impact AI projects. Measure outcomes quarterly. Share results with stakeholders. This builds confidence and defends continued investment.
5. Acknowledge the mix of caution and enthusiasm. Your workforce is not uniformly excited or skeptical about AI. They're thoughtful. They see potential and risks. Lead with transparency about both. That's how you build institutional trust.
A Final Reflection for Today

The disconnect between adoption and governance is not a failure of individuals. It's a failure of leadership. Ninety-four percent of your workers are using AI. They're not waiting for perfect policy. They're solving problems. The question is whether you will help them solve those problems with institutional support or whether you will let them navigate risks alone. Clear policy, professional development, and transparent dialogue about both opportunities and risks—that's how institutions move from the fear-acceptance limbo into reinvention. The time to act is now.

HigherEd AI Daily
Curated by Dr. Ali Green
Sources: EDUCAUSE, U.S. Department of Education, EdWeek Research Center, Inside Higher Ed
Visit AskThePhD.com for governance templates, professional development frameworks, and policy compliance guides.
Leading higher education through intentional AI governance, transparent leadership, and human-centered strategy.
