Unfixed Newsletter, April 29–May 13

Editors’ note

The past two weeks did not bring a raft of new models; instead, we saw new turns in existing debates. The age-old question of who owns course materials is being answered in new ways around AI at Arizona State. The jobs-and-AI question just got more complicated, and we are seeing concrete policy changes at a flagship university.

We are moving to a monthly cadence over the summer because higher ed news tends to slow down. Look for our next newsletter on June 10.

1) ASU’s Atom turns faculty teaching materials into AI inputs

Arizona State University quietly launched Atom, an AI course-building tool that uses ASU faculty materials to generate customized learning modules. Several faculty members said they were surprised to learn that their lectures, slides, assignments, and videos were being clipped and repackaged by the system.

Why this matters: Where to even begin? ASU is often a leading indicator of where higher education is headed, so there is almost no chance this stays an ASU story. Who owns course materials is an open question; the new question is, if a faculty member does, do they have the right to keep those materials out of training data? This also raises questions about the role of the instructor, and about whether this is part of a larger trend toward outsourcing faculty work to AI models.

https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/04/29/faculty-concerned-about-asus-new-ai-course

2) A major ChatGPT learning study gets retracted

Humanities and Social Sciences Communications retracted a 2025 meta-analysis on ChatGPT and student learning, citing discrepancies that undermined confidence in the analysis and conclusions. The notice says the authors did not respond to correspondence about the retraction.

Why this matters: This article was cited often as evidence that LLMs can help with student learning. The retraction does not mean they cannot; it does mean we don’t have the high-quality evidence on the topic we thought we did. We have to believe better research on this question is in the pipeline, and we certainly hope so, because it is desperately needed.

https://www.nature.com/articles/s41599-026-07310-z

3) The AI jobs debate gets harder to summarize

Challenger, Gray & Christmas reported that AI was cited for 21,490 April job cuts, or 26 percent of the month’s total. At the same time, a16z argued that the “AI job apocalypse” narrative relies on a fixed-work assumption and misses how automation can also create new demand.

Why this matters: This is a genuinely difficult topic. A few months ago we weighed in on it for EdSource, and the evidence of job displacement seemed stronger then than it does now. Right now, many of those shouting about “AI layoffs” seem to be covering for other underlying causes. This matters for us in terms of recruitment and the story we are telling students. There are a lot of “vibes” around jobs and AI right now; we have to cut through them with a clear story about college and student futures.

https://www.challengergray.com/blog/challenger-report-april-job-cuts-rise-38-from-march-ytd-cuts-down-50/

https://www.nytimes.com/2026/05/03/opinion/ai-jobs-unemployment-silicon-valley.html

4) Princeton faculty turn back the clock

Princeton faculty voted to require proctoring for all in-person exams beginning July 1, ending a 133-year practice of unproctored exams under the university’s Honor Code.

The policy proposal cites generative AI and small personal devices as major reasons for the change, arguing that misconduct has become harder for other students to observe and report.

Why this matters: This is one of the clearest signs yet that AI is moving assessment policy, not just classroom conversation. Faculty should expect more institutions to revisit old assumptions about take-home work, in-class exams, honor codes, proctoring, and the burden placed on students to police each other.

https://www.dailyprincetonian.com/article/2026/05/princeton-news-adpol-proctoring-in-person-examinations-passed-faculty-133-years-precedent

5) Europe says AI in education should stay teacher-centered

The Council of the European Union advanced conclusions on teachers in the era of AI, emphasizing pedagogical purpose, human agency, teacher autonomy, professional judgment, AI literacy, privacy, equity, and workload. The document says AI should support teachers, not replace or isolate them.

Why this matters: This is a useful counterweight to tool-first rollouts. Faculty do not need another vague AI enthusiasm memo. They need governance that protects academic judgment, workload, privacy, and the ability to say no when a tool does not fit the learning goal.

https://data.consilium.europa.eu/doc/document/ST-8262-2026-INIT/en/pdf

From Our Work

What we got wrong in “AI went MAGA”

In this blog post, we revisit our Fall 2025 Inside Higher Ed essay on politics and AI and what it means for universities. Many of the predictions we made turned out to be incorrect, and the political cleavages now look far more complicated than traditional culture-war lines.

Ep. 28 AI Retrofit Workshops: Redesigning Teaching for the Age of AI

In this episode of Unfixed, we unpack what it actually means to run AI retrofit workshops—where faculty redesign assignments and learning outcomes for a world shaped by generative AI. Rather than hype or quick fixes, this work starts from disruption: courses no longer behave as intended, and instructors are left to figure out what still holds.
