Elites are distorting the AI discourse
A recent episode of the Hard Fork podcast covered Business Insider’s decision to allow journalists to use AI tools for first drafts. The conversation was punctuated by tropes about AI writing that already feel ancient. It was so out of step that one might pause the show to check whether the episode was actually from two years ago.
The duo was aghast at the idea that AI writing would intersect with the work of journalism, at one point remarking, “There is not that far of a bridge from AI is writing the first draft of my story to AI just has the job that I used to have.”
In another quote that sounds like it was pulled from early 2023, Kevin Roose says, “the AI writing still has sort of various stylistic flaws in it. You can kind of tell when you're looking at something.”
Casey Newton responds, “I can always tell. I can always tell.”
This is empirically untrue: study after study debunks the idea that humans can reliably differentiate between human and AI writing. It is representative of an elite perspective on AI: certain things may be true for the masses, but they are not true for them. Our contention is that people in elite institutions are distorting the discourse about AI, making it more difficult to have real conversations.
A few weeks ago, Princeton professor Graham Burnett was interviewed following his viral New Yorker piece from April. He was more candid in the interview than in his writing, remarking on the future of higher education: “Princeton, Harvard, Yale, Stanford, they're not going to change that much. But a lot of the other institutions, we're seeing it already. They [have] got to get dynamic or die.” So we once again see the trope: other people will need to change, but what I do is sacred.
This also surfaced in Ezra Klein’s 2024 interview with Ethan Mollick. Klein notes, “I know this wildly powerful technology is emerging beneath my fingertips, as much as I believe it’s going to change the world I live in profoundly, I find it really hard to just fit it into my own day to day work.” Spoken like a man with an entire team of researchers and editors behind him.
The message: this technology is powerful and transformative, and it will change everything except what I do, so stay away from my elite profession, university, or craft. There are myriad problems with letting elites shape the discourse about AI. We will unpack a few of them.
First, they are probably wrong. If the last three years have taught us anything, it is that the rapid acceleration of model capabilities means almost no domain is safe. Two years ago, we were shocked that GPT-4 could pass the bar exam. Earlier this month, Google and OpenAI coding tools placed at the top of an international competition among the world’s best programmers. The idea that the models simply will not be able to hack it in journalism or podcasting is absurd. Elite apprehension about the technology is linked to its leveling effect: it gives people outside of prestigious, well-funded institutions access to some of the same support elites already enjoy.
Second, even if they are right for themselves, they are wrong for us. Most of us are not at elite institutions with elite resources. We don’t have research teams or editors. We review each other’s work before we send it out and use ChatGPT as a research assistant. Discourse like this lets us imagine ourselves at institutions like Princeton. The reality is that when Burnett says, “I don't really care especially about their job training functionality. There are plenty of other people who can worry about that,” we are the “other people” he is talking about.
Anecdotally, elite opinions are seeping into everyday conversations about AI and higher ed. When family members text after reading a high-profile piece in The New Yorker or New York Magazine, asking if AI is “destroying students’ brains” or if “kids just don’t want to write anymore,” it shows how elite discourse frames the issue for elite-adjacent readers. When the New York Magazine article went viral in May, we heard about it so much that we wrote a response highlighting its internal inconsistencies and the lazy reporting that ignored two years of pedagogical innovation. Moreover, at state schools and smaller institutions, the deeper concern is how elite narratives shape the perceptions of prospective students and their parents, potentially discouraging them from attending and widening the divide between elite universities and the rest of us.
If you are at an elite institution, good for you. Sometimes we wish we were also insulated by billion-dollar endowments and generational prestige, but we are not. We are worried about job placement. We do concern ourselves with how this technology will impact our professions and those of our students. This is the work. Our call to all of us in state schools, community colleges, and not-that-well-funded outlets is this: tune this message out. We are the “other people” they are referring to, and we have to figure this thing out on our own. Elites are used to the rules of life not applying to them, and whether they are right or wrong, they are not helping.