AI Is Not Antithetical to Human Intelligence: What The Guardian gets wrong
Every few months a new “AI is destroying higher education” piece hits the social media circuit. This month, March 2026, Alice Speri in The Guardian brings us the woe from the humanities.
In the article, Speri reports a parade of faculty voices that can be summed up this way: reliance on artificial intelligence is “fundamentally antithetical to the development of the human intelligence” they are tasked with guiding.
Antithetical is a strong word. It implies irreconcilable forces: you either have AI or you develop human intelligence. Not both.
That framing deserves scrutiny.
To be clear, Speri is reporting what faculty are feeling. And the pain and uncertainty she documents—especially among humanities professors—is real. But reporting a mood and validating a thesis are not the same thing. The question is whether the evidence supports the leap from “this is disruptive and destabilizing” to “students are becoming incapable of thinking.”
The “Incapable” Claim
Literature professor and novelist Michael Clune tells Speri he is sure that AI has already left students “incapable of reading and analyzing, synthesizing data, all kinds of skills.”
Incapable.
Leaving aside his methodology (or is it just vibes?), that is an extraordinary claim. Incapable suggests collapse, not struggle. It suggests a population-level cognitive failure.
I am not sure what the difference between his students and mine is. My students are distracted, stressed, uncertain, and yes, offloading work to AI, like anyone's. But my vibe is that they are definitely not “incapable” of the things Clune accuses them of.
Here is where we need to be careful.
First, we have previously written about the trap of pre-AI nostalgia for Inside Higher Ed and we continue to see this play out. How people are remembering the pre-AI past does not match the reality we lived through.
Second, there are many other things going on in students’ lives, not just AI. Moreover, our classes are often not the most important thing—gasp—in their lives.
Third, as we’ve written in Inside Higher Ed, assessment of student learning is broken in the age of AI; getting a clean baseline is nearly impossible. Our traditional methodology is not up to the challenge. If your primary instrument for measuring thinking was an at-home essay or an online exam, that instrument is now compromised. We are all projecting vibes in the absence of good data.
But projecting vibes cuts both ways.
If someone says students are now incapable of analysis, that is a systemic claim. It requires systemic evidence. In the absence of that evidence, what we mostly have are frustrated interpretations of obsolete assessment tools.
The Real Obsolescence
The doom articles have a truth embedded in them: if you relied on at-home essays or online exams to measure student thinking, reading, analyzing, and synthesizing data, then your tools are obsolete.
That is not the same thing as saying thinking is obsolete.
As I wrote in Builders last fall, creating new assignments and activities can be fun. It can also suck. And no matter what, it takes a lot of time and effort. It means abandoning prompts you refined for years. It means rebuilding rubrics. It means experimenting and sometimes failing.
But I am of the opinion that this is what I am paid to do: adapt and figure out how to help students think, learn, and practice skills. If it takes effort and time, then it is worthwhile. There is no ready-made solution. It requires trial and error and experimentation.
What worries me about the “antithetical” framing is that it treats adaptation as surrender. If AI exists, then human intelligence retreats. But that only follows if we assume that thinking must happen in isolation from tools.
We already offload cognition constantly. Calculators. Search engines. GPS. Citation managers. Spellcheck. The question is not whether we offload. It is whether we retain judgment, evaluation, synthesis, and direction. That’s why Zach and I argued that faculty expertise is more important than ever.
Who Is Doing the Critical Thinking?
The Guardian headline invokes the death of “critical thinking.” It imagines a world in which our students—and by extension all of us—come to offload our thinking to the thinking machines.
We already do that, and we have been for a long time.
But does that mean we’re not thinking and acting?
It works both ways. I still hear people say that AI is so dumb that it can’t do X. Almost every time, they are parroting something from online discourse or something someone else told them, not something they have rigorously tested themselves.
Who is doing the critical thinking now?
AI critics and skeptics push so many shibboleths and myths that it undermines the seriousness of their objections. That does not mean the objections are trivial. It means they deserve better than reflexive dismissal or civilizational panic.
As Ray Schroeder argues, critical thinking in 2026 includes knowing how to interrogate AI output, how to test its claims, how to refine prompts, how to compare its synthesis against primary texts. That is not the death of thinking. It is a shift in where thinking is visible.
Humanities Pain Is Real, So Is Experimentation
The pain and uncertainty faced by humanities faculty, particularly English and writing instructors, is very real. STEM and the social sciences have a different relationship to AI, with different potential and different obstacles. Writing-intensive disciplines feel this pressure directly.
But even in English and writing programs, people are not simply surrendering. They are building.
Tamara Tate, Stephanie Tran, and their colleagues at the Digital Learning Lab have developed PapyrusAI, a tool designed to help instructors and students write in the age of AI. It is best thought of as embracing Ethan Mollick’s co-intelligence thesis—not outsourcing thinking, but structuring collaboration between human and machine.
That is a very different framing from “antithetical.” It assumes friction, risk, experimentation, but not irreconcilability.
We Do Not Get to Ignore This
Zach and I have been saying from day one that we can’t ignore this technology or wish that it would go away. It is not going away.
We have to look it in the face and work to understand it. We have to experiment with how it can further learning, thinking, and writing at the same time that we foster creativity and humanness.
Declaring students incapable may feel cathartic. It may reflect real frustration. But it risks collapsing a pedagogical challenge into a civilizational diagnosis.
Doom is easy; adaptation is harder. But adaptation is the job.