Unfixed Newsletter — April 16–28, 2026

Editors’ note: The theme of the past two weeks is that nothing is slowing down. A year ago there was a drumbeat that model training had hit a wall; instead, OpenAI is releasing a flurry of new models with radically improved capabilities while new Claude models wait in the wings. There are new educational fronts, including a possible competitor to traditional higher education, and new fronts at the intersection of public policy and AI. No time to catch our breath: it is everything, everywhere, all at once.

1) OpenAI is trying to own the whole desk

OpenAI spent the week tying together GPT-5.5, workspace agents, and ChatGPT Images 2.0 into a broader vision of AI for “real work.” The combined message is clear: these systems are moving from chat assistants to tools that draft documents, automate workflows, and generate polished visual material. 

Why this matters for higher ed:

The conversations we have had about writing, coding, and idea generation have now moved into almost every domain. More student and faculty work will arrive as polished prose, slides, spreadsheets, and visuals, which makes surface quality even less useful as a proxy for learning. Faculty will need assignments that reward process, source judgment, and revision decisions, not just finished output.

Sources: OpenAI on GPT-5.5, OpenAI on workspace agents, OpenAI on ChatGPT Images 2.0.

2) Khan Academy, TED, and ETS are taking a direct shot at the degree

Khan Academy, TED, and Educational Testing Service (ETS) announced the Khan TED Institute, a new higher-ed model built around mastery, applied AI, and “human skills” like communication and collaboration rather than seat time alone. It is still early, but it is one of the clearest attempts yet to build a lower-cost credential explicitly for an AI-shaped labor market.

Why this matters for higher ed:

We don’t want to overhype this. Private companies and start-ups claiming to rewrite the rules of higher education appear and vanish all the time, but these are brands with serious recognition inside and outside education. This puts fresh pressure on colleges to explain what students are paying for beyond content delivery. It also keeps pushing the same uncomfortable question: if AI lowers the value of routine knowledge work, what exactly should a degree certify now?

Sources: Khan Academy announcement, ETS announcement.

3) Automated writing feedback is reproducing old bias

New Stanford-led research, covered this week by Hechinger, found that identical essays received different AI feedback depending on how the writer was labeled. Essays attributed to Black students got more praise and less criticism, while essays attributed to white students were more likely to get comments about structure, evidence, and clarity.

Why this matters for higher ed:

AI for student feedback was already a fraught space with strong opinions on all sides, and this is a new wrinkle. The pattern may reflect bias in the training data, the model design, or something else. Regardless, AI feedback is not neutral infrastructure just because it sounds supportive. If campuses normalize automated feedback, they risk hard-coding unequal expectations into writing support, tutoring, and assessment workflows.

Sources: Hechinger Report, arXiv preprint.

4) The labs are moving into politics in public

OpenAI’s industrial-policy memo and the Noema response to it mark a new chapter: the labs are no longer just lobbying quietly or shipping products. They are making public arguments about labor, training, research funding, and what kind of social contract should govern the AI economy. 

Why this matters for higher ed:

Faculty and universities are not just downstream users in this story. They are being positioned as workforce pipelines, research partners, and legitimacy providers. That makes governance and political framing part of the higher-ed AI story. The question for those of us in higher ed is whether we will help shape this story or merely respond to it. We cannot spend political dollars like the billionaire class, but we do have a voice.

Sources: OpenAI, “Industrial Policy for the Intelligence Age”; Noema response.

From Our Work

Future of Higher Education with Nick Dirks

Educational Policy for the Industrial Age

5 AI Myths and Why We Must Move Past Them (Inside Higher Ed)
