Unfixed Newsletter — March 3–17, 2026
From the hosts of Unfixed and authors of Melts Into Air, top AI stories for higher education, every two weeks.
The last two weeks did not produce one clean AI story. Instead we got a cluster of signals that the old bargain between writing, assessment, and evidence of learning is breaking down. The pressure is coming from several directions at once: students using AI as a normal study tool, public demonstrations that machine prose can outperform human prose on surface polish, louder warnings about labor-market disruption, and the steady conversion of AI from novelty into institutional infrastructure. If there is a breakthrough theme, it is normalization.
1) The New York Times writing quiz made the authorship problem unavoidable
A New York Times blind writing quiz, highlighted by Kevin Roose, asked readers to compare human-written and AI-written passages. The result that spread was predictable, but the reactions were wild: across more than 86,000 responses, readers slightly preferred the AI-written passages overall, 54% to 46%. The quiz has limits; short passages are not the same thing as sustained argument. But that is also the point: on the kind of short, polished prose that many assignments still reward, machine writing is already highly competitive. If you want a taste of the visceral reaction, check out the responses to Roose's post.
Why it matters: Nik has long made the point that we in the academy care about writing in a different way than the rest of the world does. This result, and the reaction to it, bring that home. It also undercuts the case we make for writing as a skill students (and future employees) will need, since readers do not seem to care who wrote a passage until they are told they preferred the AI version, and then they get angry.
Link: Kevin Roose post: https://x.com/kevinroose/status/2031397522590282212
2) The Guardian faculty story is useful mainly as a sign that the assessment crisis is now mainstream
The Guardian’s interactive feature on AI, professors, students, and learning is worth noting less because it offers a deep diagnosis than because it shows the issue has broken into broad public coverage. Its examples — oral exams, handwritten work, more in-class writing, more attention to process — line up with what many faculty are already trying as they respond to the collapse of trust in take-home writing as a standalone measure. Separate coverage from NPR affiliates in the same period points in a similar direction: professors and students are making local AI rules, and those rules often do not match.
Why it matters: the Guardian piece should not be treated as the final word, but it does register something real: AI has pushed higher ed out of the abstract ethics phase and into redesign. The substantive question is not “Should we be worried?” It is “What kinds of work still show that learning happened?” Nik’s full response and critique is linked below. The piece broke through to audiences outside higher ed, even if the themes feel somewhat tired at this point.
Link: https://www.meltsintoair.org/chatgpt/guardian-ai-critical-thinking-response
3) Student AI use is now normal, not exceptional
HEPI’s 2026 Student Generative AI Survey says AI use among undergraduates is now “near universal.” The report, based on 1,054 full-time UK undergraduates, finds that 95% report using AI in at least one way, 94% say they use generative AI to help with assessed work, and 65% say assessment has changed significantly in response to AI. HEPI’s own summary puts the issue plainly: the question is no longer whether students use AI, but how well they use it and how institutions are supporting them to do so responsibly.
Why it matters: We have seen a lot of surveys about student usage, but wow, 94% is a big number! Too many campus policies still treat AI use as deviance from the normal academic baseline. That baseline is gone. The real divide now is between courses that explicitly teach critical, bounded, transparent AI use and courses that preserve a fiction of nonuse while students work around it.
Link: https://www.hepi.ac.uk/reports/student-generative-ai-survey-2026/
4) The labor story is splitting between warning and evidence
The labor conversation is getting louder, but not cleaner. Anthropic’s new paper presents a framework for tracking AI’s labor-market effects and says it finds limited evidence so far that AI has affected employment at the aggregate level; it also reports suggestive evidence that hiring of younger workers may have slowed in more exposed occupations. That makes it a useful counterweight to more alarmed commentary, including the recent New York Times Opinion argument that an AI labor crisis is coming and that policy should move now rather than later.
Why it matters: Higher ed should not build workforce strategy on vibes, but it also cannot dismiss the possibility of real disruption just because the macro data is still early. We need to move curriculum in areas where the disruption already looks concrete, like computer science and writing, but, as we have been saying for years, we also need to build more agile processes so that we can move quickly when the signals firm up elsewhere.
Links:
https://www.anthropic.com/research/labor-market-impacts
NYT Opinion mirror used for reference: https://extragoodshit.phlap.net/wp-content/uploads/2026/03/Opinion-The-A.I.-Labor-Crisis-Is-Coming.-This-Is-the-Solution.1.html
5) Enterprise AI is becoming institution-ready, which raises the governance stakes
OpenAI’s Enterprise/Edu release notes remain relevant here because they show the boring part of AI adoption becoming the important part. Recent updates include a Projects layout with separate Chats and Sources tabs, in-region data residency for synced Google Drive and GitHub apps across supported regions, admin-controlled app actions, and, as of March 11, retirement of GPT-5.1 models in ChatGPT in favor of newer versions. These are product notes, but they are also institutional signals: more source integration, more admin control, more model churn, and more dependence on centrally managed settings.
Why it matters: once AI systems connect to institutional files, research materials, and shared workflows, the issue stops being only pedagogical. It becomes a governance question about permissions, retention, regional data handling, and dependency on external vendors whose models and policies keep changing. Strap in, because we are headed for some thorny conversations between academics and information security offices. We also think there is an underreported story lurking here: OpenAI and Anthropic are start-ups without years of experience working in complex enterprise and edu ecosystems. It is possible the high stakes of this next moment will push organizations toward more experienced partners like Microsoft and Google.
Link: https://help.openai.com/en/articles/10128477-chatgpt-enterprise-edu-release-notes
From Our Work
https://www.meltsintoair.org/chatgpt/guardian-ai-critical-thinking-response
https://www.meltsintoair.org/unfixedpodcast/ep-24-teaching-writing-in-the-age-of-ai