Unfixed Newsletter — April 2–15, 2026
Editors’ note: These two weeks are about systems straining under real use. Leaks, failing safeguards, and uneven adoption are exposing how unstable the current moment is. For faculty, IT staff, and students this has been an unsettling run. There are some silver linings as we start to see adaptation at scale.
1) The Claude leak turns AI systems into reusable exploits
A full system prompt and tooling setup for Anthropic’s coding agent circulated publicly, and variations are already being adapted across tools.
Why it matters: We have long speculated about the first big security breach for one of these systems, and it has arrived. This episode highlights the growing pains of start-ups operating with such a high degree of exposure, and it is sure to give IT leaders pause as they consider partnerships. There is also a classroom implication. We previously covered Einstein.ai, the agent that could (maybe) do coursework on behalf of a student. This leak makes customization of coding and agent tools built on Claude even more powerful.
2) CSU data shows AI use is already the baseline
A new CSU system survey, alongside LA Times reporting, points to widespread, routine student use of AI tools with uneven faculty guidance. Headline numbers include 95% of faculty, staff, and students having experimented with the systems and 80% of students not feeling comfortable submitting AI work as their own.
Why it matters: The gap is now structural: students are using AI regardless of policy clarity. The practical move is designing assignments where thinking is visible and process is required, rather than trying to control tool use directly. For faculty who want to push back on AI use in higher education, the task has become even more daunting.
Link: https://www.latimes.com/california/story/2026-04-01/csu-ai-survey-students-faculty
3) Faculty are shifting from detection to redesign
Recent reporting and institutional updates show that even improved detection approaches are unreliable under normal student workflows. A growing number of faculty are buying into the shift toward assignment redesign instead of detection.
Why it matters: Students aren’t submitting raw AI output; they are editing and combining it. That makes detection both ineffective and risky: unreliable detection techniques and technologies let savvy students off the hook and punish only students whose use is still unsophisticated. The upside is that this is pushing a shift toward more defensible assessment: drafts, checkpoints, and work that makes thinking visible rather than inferred.
4) Multimodal AI is now “good enough” for coursework
Newer tools can generate coherent combinations of text, visuals, audio, and video—usable for presentations and assignments with minimal effort. Early image, video, and speech generation left a lot to be desired, but improvement has come rapidly.
Why it matters: Production quality is no longer a proxy for learning. This destabilizes assignments that rely on polish as evidence of effort, while also raising new issues around fabricated visuals and synthetic media in student work. At the same time, it may lower barriers for some students to express ideas in formats that were previously inaccessible.
Link: https://www.theverge.com/2026/04/08/ai-multimodal-tools-education
5) AI is starting to reshape research workflows
Google Research introduced agents designed to help generate figures and assist with peer review, targeting routine parts of the academic workflow.
Why it matters: This is one of the clearer near-term benefits: AI is compressing the time spent on research labor (visualization, formatting, early review), not generating discoveries. For faculty, that’s useful, but it also raises questions about accuracy, authorship, and what counts as scholarly contribution. Advancements in science have long been promised as payoff for the disruption and cost of AI. While large scale AI generated breakthroughs may still be in the future, incremental change and work improvements are here today.
From Our Work
One Year Into the AI-Powered University: https://www.meltsintoair.org/chatgpt/one-year-ai-powered-univesrity
AI Grief and Instructional Design (Unfixed Podcast): https://www.meltsintoair.org/unfixedpodcast/ai-grief-instructional-design
Editors’ post-script
We’re watching emerging claims about Claude’s new model “mythos” and a new OpenAI model. Both have been slow-rolled, with the companies citing fears that the models’ capabilities pose cybersecurity risks. Right now, this looks more like narrative than evidence. No higher ed institutions appear to have advanced access, and there’s little concrete to act on. Worth tracking, but not yet something faculty need to respond to.