Unfixed Newsletter — February 15–March 2, 2026
Unfixed Newsletter—The AI Higher Ed Breakdown
From the hosts of Unfixed and authors of Melts Into Air: five AI stories that actually matter for higher education, every two weeks.
Editors’ note
This was the period when “AI in a chat box” kept turning into AI that acts inside systems: the LMS, file systems, office suites, and even government contracts. That shift matters because it breaks many of our default assumptions about authorship, effort, and verification. Meanwhile, job-panic narratives are spilling directly into student and faculty anxiety.
The Stories
1. Einstein AI + Canvas: the LMS-as-a-course model meets the “agent” era
A tool marketed itself as an autonomous “student stand-in” for Canvas, triggering immediate faculty panic and then rapid repositioning in public messaging. The tool was eventually taken off the Companion.ai site. The takeaway isn’t Einstein; it’s how easy it is to build agents that operate via standard web sessions (outside the LMS’s normal detection model).
Why it matters: if a course can be completed entirely inside the LMS, assume a capable agent can complete big chunks of it. This is a sort of end-game for many online programs and something all of us with online course components will have to navigate.
Links:
2. Turnitin: “80%+ AI” submissions rise and the pivot is toward transparency tools
Turnitin released new data showing a jump in submissions flagged as heavily AI-generated (e.g., 80%+ AI), alongside a push toward “process visibility” products (drafting + responsible-use guidance) rather than pure detection theater.
Why it matters: this nudges integrity toward observable workflows (drafts, checkpoints, rationale memos) and away from after-the-fact verdicts based on detectors. Put another way, these tools have been bad since they were released, and companies are still trying to figure out what role they are supposed to play in this space.
Link:
3. “Something big is happening” goes viral—and the job panic spills into higher ed
The “something big is happening” essay that speculatively mapped out an impending wave of job losses and hiring freezes due to agentic AI stayed viral during this window. We published a commentary on its links to higher ed last month. Since then, Citrini Research’s speculative macro scenario “2028 Global Intelligence Crisis” rattled markets and poured gasoline on layoff fears. This is now a persistent strand in the news cycle, mixing CEOs who use AI as cover for layoffs with what appear to be genuine AI-driven job impacts.
Why it matters: regardless of accuracy, these narratives are driving real student anxiety, parent expectations, and campus planning pressure (majors, advising, budgets), and they coincide with vendors shipping more workflow automation.
Links:
4. Wild card: the labs vs. the “Department of War”
Anthropic refused Pentagon pressure to loosen safeguards (especially around surveillance and autonomous weapons); then OpenAI announced a Pentagon deal, sparking backlash and a new round of “red lines” debates.
Why it matters: universities sit downstream (research ties, procurement, norms). The core questions belong in AI literacy and governance: are we asking people to normalize tools whose highest-stakes customers are pushing to remove safety constraints and potentially embed abuse into the structure? Are we asking the labs we partner with for research to serve as the ethical backstop for US defense policy? Or are we partnering with companies developing tools that far outpace the capability of nation-states or democratic oversight? It is a mess.
Links:
https://www.theguardian.com/technology/2026/feb/28/openai-us-military-anthropic
https://www.axios.com/2026/03/01/openai-pentagon-anthropic-safety
https://daringfireball.net/linked/2026/03/02/anthropic-and-alignment
5. OpenAI Enterprise/Edu updates: Projects get a “Sources” home; data residency expands for Drive/GitHub sync
OpenAI shipped a set of quiet-but-consequential Enterprise/Edu changes (especially for those of us in the CSU and in other spots who have access). Projects now have separate “Chats” and “Sources” tabs, with project files moved into Sources (and sortable), making Projects feel more like a persistent workspace than a folder of conversations.
At the admin/compliance layer, Google Drive and GitHub apps with sync now support in‑region data residency across all supported residency regions (previously US-only), including both user-auth and workspace-auth sync.
Why it matters for higher ed (faculty-facing):
These are the kinds of “release note” shifts that change day-to-day reality: what counts as a course/research knowledge base inside ChatGPT (Projects + Sources), what’s permissible to connect (Drive/GitHub) under institutional data rules, and how stable your classroom guidance is when models roll off the menu. These updates also look like the products of user requests, which makes us think we should be more vocal about asking for what we want from the inside.
Links: