Unfixed Newsletter — March 18–April 1, 2026
From the hosts of Unfixed and authors of Melts Into Air, top AI stories for higher education, every two weeks.
The last two weeks are starting to reveal a maturing technology, wrinkles and all. Institutions that partnered with the labs are dealing with visible pushback just as new contracts are being negotiated, while the tools themselves are pushing into new spaces and recoiling from others. We have seen steady capability expansion before, but this stretch does not feel like linear growth. It feels more unstable: a period when adoption is real enough that the contradictions are getting harder to smooth over.
1) Canvas wants AI inside the course shell
Canvas’s new IgniteAI Agent is pitched as help with rubric generation, content alignment, and discussion reviews, while stopping short of full grading automation. Instructure says the aim is to offload lower-value tasks, but the broader implication is that agentic AI is now moving directly into the LMS itself.
Why it matters for higher education: This is one of the clearest signs yet that AI is becoming part of routine teaching infrastructure, not just a side tool students or faculty use on their own. The practical question for faculty is which parts of teaching become harder to define as distinctly human once the course platform itself starts offering to do them. The specifics are still in development, so it is unclear what this will mean for faculty (and students) who want to avoid AI. It is also likely to raise alarm bells for faculty and IT staff still reeling from the Einstein.AI debacle we covered last time.
2) Claude’s computer-use push turns the desktop into an AI workspace
Anthropic now lets Claude point, click, browse, open files, and use apps on a user’s behalf, with task handoff available from a phone through Dispatch. Anthropic says the feature is still early, can make mistakes, and should not be used with sensitive data. For now, the feature is available only on Apple Mac computers with either a Claude Pro ($17/month) or Max ($100/month) plan.
Why it matters for higher education: This is bigger than a better chatbot. For faculty research, writing, coding, and administrative work, it signals a shift from systems that generate text to systems that act across software environments, which raises immediate questions about permissions, security, reproducibility, and what counts as your own work when an agent is operating the tools for you. There is also an expansion of the “always on” professor here. The new capabilities mean (on a Mac only, for now) you can direct Claude to do things from your phone while you are in a Senate meeting or at your kid’s track meet. Is that better? Maybe. Does it exacerbate the feeling many of us have that we are always supposed to be on, working, and accessible? Probably.
Links: https://claude.com/blog/dispatch-and-computer-use
https://podcasts.apple.com/us/podcast/how-to-use-claudes-massive-new-upgrades/id1680633614?i=1000757158284
3) Faculty are pushing back on campuswide AI deals
At CSU, a $17 million OpenAI contract is up for renewal in June, and faculty opposition has become newly visible. Similar conflict is unfolding in Colorado, where a ChatGPT Edu rollout has been delayed amid faculty concerns about priorities, privacy, and the terms of institutional adoption. The controversy with the Department of Defense/War has also complicated the relationship as the models become politically coded in new ways.
Why it matters for higher education: Once AI becomes a procurement decision rather than an individual choice, faculty end up living with default tools. This is where AI adoption becomes a governance story with direct classroom consequences. The politics are also intersecting with actual workflows. If the pushback succeeds, what does that mean for faculty who have built classes and workflows on the assumption that all their students have access to ChatGPT Edu? Outside the classroom, faculty are doing research with and on LLMs, and without the protection of the Edu ecosystem that data is much less secure. Platform lock-in is always an issue for higher ed, but this one seems bigger than usual because the tech interacts with every part of the workflow for many of us.
4) Students are learning AI use from social media before they learn it from school
A new survey found that 70 percent of learners use AI daily or weekly for education, and 48 percent look to social media for help using AI, compared with 23 percent who seek help from school-related sources.
Why it matters for higher education: Faculty cannot assume students are arriving with careful norms for citation, verification, or appropriate use. More students are building their AI habits from peer advice and platform culture, which makes explicit expectations and process-based assessment more necessary, not less. There is another way to read this as well: does it mean what we offer on campus is not good enough, so students go elsewhere, or that students will go elsewhere no matter what, so why bother? Either reading might be viable depending on your institution.
5) The model race is starting to break the product roadmap
OpenAI cancelled Sora, its video-generating social media app, and has paused plans for ChatGPT’s adult mode indefinitely, while reshuffling other products and priorities as competition intensifies. The larger signal is that competitive pressure is making roadmaps less stable, even as users are being encouraged to build workflows around these tools.
Why it matters for higher education: With few exceptions, we don’t think faculty were building curriculum around the impending release of ChatGPT adult mode. We do think this has to give us pause as we try to adapt to environments where the labs change direction abruptly. What if the token price for Claude Code skyrockets and you had relied on low- or no-cost access for students? What do we do if we have built a host of custom GPTs or Gems (Google) and the platform then limits how they can be shared and used? We have no insider knowledge of any of these things, but scenarios like these give us pause as we consider what is stable enough to build on.
From Our Work
AI agents and the problems of start-up culture for higher ed: https://www.meltsintoair.org/chatgpt/ai-agents-and-the-problems-of-start-up-culture-for-highe
Are Asynchronous Online Classes Broken? AI, Trust, and the Future of Online Learning: https://www.meltsintoair.org/unfixedpodcast/asynchronous-is-broken
Zach was on the latest Alchemy webinar about AI agents: https://www.meltsintoair.org/chatgpt/zach-was-on-the-latest-alchemy-webinar