Closing The Gap Part 3: “We can reliably spot AI writing”
In March of 2023, Nik and I published a short piece in response to a (terrible) Vice article about professors spotting AI writing. The article is full of cringeworthy quotes like “ChatGPT is so bad at essays that professors can spot it instantly.” We thought this was out of touch at the time, but we could never have imagined the myth would persist for years. Somehow, we keep encountering faculty and reading articles making unfounded claims about spotting AI writing. Colleges and universities continue to host and write detection guides on their websites. Tech publications and journalists, somewhat ironically, continue to publish this advice as well. In a bizarre exchange we have cited before, Casey Newton of Platformer and Hard Fork recently said of AI writing, “I can always tell.”
First, we cannot always tell. Study after study confirms that humans are remarkably bad at detecting AI writing. The fact that we have peer-reviewed, quasi-experimental research on this exact topic while academics continue to propagate the myth is baffling. I can almost understand a for-profit blog running a headline like “How To Tell If Something Is Written By AI: 6 Telltale Signs To Look For,” bad as it is, but aren’t we supposed to be held to a higher standard of evidence? In the academy we are supposed to be driven by data and expertise. Why are we still leaning on “one weird trick” when we have actual evidence?
Second, we can spot people who are bad at using AI. Consider some of the advice common to these guides, like an abundance of em-dashes or bullet points, or a lack of engagement with current events. You might find students doing these things and, in conversation, discover they used AI. But this is selection bias: you never learn about the students who used AI and slipped past you. There are entire AI tools designed to circumvent these tactics, making output harder for machines or humans to flag. So yes, you might catch a student who knows about AI but not about the array of humanizing tools. How does that solve anything?
We understand the attraction of solutions like these because they recenter our expertise and intellect. You are special. Trust your instincts. This is all comforting to a class of intellectuals who feel threatened by the technology, but it is hope built on a lie. We recently argued in an EdSource piece that professorial expertise matters more than ever in an AI age because it allows us to identify quality outputs and prompt well, not because it allows us to spot writing quirks and infer authorship.
In addition to being intellectually dishonest, this advice carries a greater risk, one we pointed out in 2023 and want to emphasize again. Trusting your instincts “introduces a different kind of bias where professors are likely to target students who we don’t think are capable of good work and let other students slide. Historically, this breaks down along the lines of class and race where poorer and darker students are more likely to be called out based on unscientific vibes.”
So let’s have our conversations about design, engagement, and a different kind of education. But to get there, we have to give up the myths and misconceptions that are holding us back.
Missed the earlier parts of the series? Find them here: