Short on Time? Essential Links
How Teens Use and View AI (Pew Research Center)
Daily AI Briefing for Educators
HigherEd AI Daily
Saturday, February 28, 2026
Today's thread connects the generation arriving at your door with the infrastructure governing the tools they use. A major Pew study reveals what incoming college students already believe and practice around AI and academic work; a landmark governance moment shows how AI companies navigate military and government pressure on safety safeguards; and a powerful new image generation tool gives educators fresh creative options at an accessible cost.
These developments converge on a single question: what does responsible AI use look like, and who defines it?
Your Next Students Already Have a Default Position on AI and Cheating
A new Pew Research Center survey of 1,453 U.S. teens ages 13 to 17, released February 24, found that 59% believe students at their school cheat with AI chatbots "very often" or "somewhat often." Fifty-four percent say they use AI tools for schoolwork. Two-thirds have used AI chatbots overall. Only 14% say cheating with AI rarely or never happens at their school.
The implications for college faculty are direct. These students are not arriving uncertain about AI; they are arriving with established habits, social norms around its use, and a working assumption that AI-assisted work is commonplace. Course policies, assignment design, and conversations about academic integrity will need to begin with this context, not against it. Designing for learning in an AI-saturated environment is no longer a future challenge; it is the present one.
Pulled from: Pew Research Center; The Rundown AI; TLDR AI
Anthropic Refuses Pentagon Safeguard Removal; Trump Bans the Company from Federal Use
In one of the most significant AI governance moments of 2026, Anthropic CEO Dario Amodei published a public statement this week rejecting the Pentagon's "final offer" to remove safeguards from Claude for military use. The Department of War had sought unrestricted access to Claude for mass surveillance and weapons targeting; Anthropic declined, saying negotiations had made "virtually no progress" and that it could not comply without compromising core safety commitments. In response, President Trump ordered all federal agencies to phase out Anthropic products within six months.
For educators and campus administrators, this episode is a case study in what AI governance actually looks like under institutional pressure. Anthropic chose to forgo substantial government revenue rather than remove the safeguards it considers non-negotiable. For institutions selecting AI vendors, the episode surfaces a real question: what commitments do these companies hold firm on, and what do they yield? These are now procurement and curriculum design questions as much as they are policy questions.
Pulled from: Anthropic; AI Fire; The Rundown AI; TLDR AI; NPR
Google's Nano Banana 2 Claims the Top Spot at Half the Cost
Google launched Nano Banana 2 (technically Gemini 3.1 Flash Image) on February 26, and it immediately ranked first on the Artificial Analysis Image Arena and LM Arena text-to-image leaderboards. The model generates images at 4K resolution with strong subject consistency and direct editing capabilities, and it costs approximately seven cents per image—roughly half the price of its predecessor. Free-tier Gemini users have access to a limited number of daily generations; paid tiers receive full access.
For faculty and instructional designers, this is a practical development worth testing this week. High-quality, freely accessible image generation expands what educators can create for course materials, presentations, and learning artifacts without specialized design skills or budget. The model also supports conversational editing, meaning you can refine an image through natural language rather than re-prompting from scratch.
Pulled from: AI Fire; The Rundown AI; TLDR AI; Google Blog
The $770 Billion Infrastructure Bet and What It Means for Your Classroom
According to projections reported in TLDR AI this week, the five largest hyperscalers (Alphabet, Amazon, Meta, Microsoft, and Oracle) are expected to spend a combined $770 billion on capital expenditures in 2026, representing a 70% annual growth rate since GPT-4 launched in 2023. This level of investment is not an abstraction for educators; it is the infrastructure that determines which AI tools are available, how capable they become, and at what price point institutions can access them.
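As a rough sanity check on those figures (an illustrative back-calculation, not from the source), compounding at 70% annually over the three years from 2023 to 2026 implies a 2023 baseline of roughly $157 billion:

```python
# Hypothetical back-calculation from the cited projection:
# if combined hyperscaler capex reaches $770B in 2026 after growing
# ~70% per year since 2023, the implied 2023 baseline is 770 / 1.7**3.
capex_2026 = 770      # billions of USD, per the TLDR AI projection
annual_growth = 0.70  # 70% year-over-year
years = 3             # 2023 -> 2026

implied_2023 = capex_2026 / (1 + annual_growth) ** years
print(f"Implied 2023 capex: ${implied_2023:.0f}B")  # roughly $157B
```

The point of the arithmetic is scale: at that compounding rate, spending roughly quintuples in three years, which is why planning against today's capability ceiling is risky.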
The practical implication for higher education is that AI capabilities are not plateauing; they are on an aggressive upward curve funded by hundreds of billions of dollars annually. Planning assumptions that treat current AI tools as representative of the ceiling are likely to be incorrect. Faculty development, curriculum design, and academic integrity frameworks built today should be designed with enough flexibility to adapt as capabilities accelerate significantly over the next 18 to 24 months.
Pulled from: TLDR AI
Try something new today
ChatPal: a conversation-first language-learning app that helps learners practice speaking through real-world scenarios and receive personalized feedback. Relevant for foreign language faculty, ESL instructors, and students building communication skills in a second language.
Pulled from: AI Fire
A Final Reflection for Today
It's a great day to try something new!
HigherEd AI Daily
Curated by Dr. Ali Green
Sources: Pew Research Center; Anthropic; NPR; Google; AI Fire; The Rundown AI; TLDR AI