Closing the Gap Part 4: “AI Proofing” Assignments
We are closing out our AI myths series with a grab-bag of bad/outdated advice we have seen online and heard propagated in groups. Broadly, these all fall in the “AI proofing” genre. This continues to be a popular approach as we get requests from faculty for help AI proofing their courses. Many of the examples here and in the other parts of the series come from proofing guides that have proliferated in the past three years.
The Calendar Trick
The 2023 versions of ChatGPT were trained on data only up to 2021. So a popular tactic early on was to ask questions about current events the systems did not have access to. This has somehow persisted as an approach. It is something we have both encountered working with faculty and it even shows up in modified form in some guides like this one from Congestoga College (2025) “In a Journalism course, for example, students might be asked to write a grant letter, but one that does not reference actual companies, current events, or community issues. To make this assignment more relevant and less AI-vulnerable, the grant letter prompt could be adjusted to require students to reference actual companies, current events, or community issues.” Obviously this advice is no longer viable as some models like Grok and Llama are fully integrated into real time social media networks and other providers are in negotiations with news outlets to have breaking news inform prompt outputs.
Trojan Horse
Last year a teacher went viral with the strategy of embedding hidden instructions in an essay prompt using small-white font. The idea is that the student will unwittingly copy-paste the whole assignment guide into the AI platform which will read the hidden prompt. The resulting output will incorporate something the instructor called for like incorporating references to a movie or a theorist not covered in class. Of course this would also work with STEM prompts and result in errant pieces of code or components of a proof that are unnecessary. We continue to hear from faculty who incorporate this practice.
At best this is “gotcha” teaching, where instructors move from teachers to pranksters. Is this the role we want to be playing? If your goal is to catch students in the act, our starting point is not good pedagogy. This is also an unreliable approach with a limited shelf life. The trojan horse is basically a variation of a prompt injection attack which is a known security issue for LLMs and one engineers are actively trying to resolve. The greater risk is that if an instructor relies on this as a tactic, it immediately absolves everyone who passes their “test.” This is somewhat reminiscent of the problems we identified in the last entry in this series–you aren’t spotting people who are using the tools, you are only spotting people who are bad at using the tools.
“You don’t know me” AI and the classroom
Several Universities recommend tying assignments closely to information from the classroom. So if for instance you had a great discussion on a concept in class you should ask students to incorporate elements of the discussion in an essay or other output. Unlike the other approaches we have covered, this one is founded in good pedagogy. This sort of woven approach to instruction is absolutely a best practice, just not an effective way to deter the use of large language models. A student can simply add the context to the prompt and the resulting output will incorporate it. We do think there is merit here insofar as we think about this as a best practice in teaching students how to prompt well–this might be a part of it, but this is not a reliable way to keep students from using models to produce content.
As we wrap up this series we want to acknowledge that most of the things we have covered in it were true at some time which is evidence of how fast this landscape is changing. Trying to get students to not use these tools seems to be a losing battle and we are not even convinced it is worth fighting. In this series we did not even touch on the necessity of working with these tools to prepare students for a competitive job market where employers increasingly expect AI literacy and tool usage. The point of this series was to clarify a series of misconceptions and the persistence of outdated advice. Nik and I recognize that people are in different places with AI depending on some ethical questions and how much you have used the tools. We want to move the conversation forward and away from these obsolete tricks. There are plenty of unsettled questions about sustainability, learning loss, human creativity, and labor. We are ready to have more of those conversations and fewer instances where we have to break the news to someone that the “one neat trick” they learned from TikTok two years ago doesn’t work anymore.
Missed the earlier parts of the series? Find them here:
Closing the Gap Part 1: Beginners and Advanced Users
Closing the Gap Part 2: AI and Personal Reflection
Closing the Gap Part 3: We can reliability spot AI writing