Closing The Gap Part 2: AI and Personal Reflection

The first myth we want to address in this series is the idea that we can AI-proof our assignments by assigning personal reflection. This was the hook of the Atlantic article that kicked this series off: the author, a professor, claims in 2025 that AI can’t answer personal questions. Alas, this is simply not the case. We routinely hear this claim from the faculty we work with, and we think the longevity of this myth comes down to prompting, or really the lack of experimentation with prompting.

Here’s how this works. I assign a personal reflection that asks students to think critically about how they are shaped by social inequalities (race, gender, class) in their everyday lives. Student A might copy the assignment instructions into an LLM and tell it, “do this assignment for me.” Student B copies the same instructions but adds personal details, such as race and ethnicity, income, age, and place of residence.

I’ve tried both scenarios. The former output is generic and bland, but not wrong; the latter is personal and grounded, and could be improved with just a little editing. We strongly suggest everyone try this for themselves with their own assignments. It is also worth noting that there are websites that let users “humanize” their AI-generated text, adding another weapon to the arms race we advise avoiding.

The idea that ChatGPT couldn’t replicate personal reflection writing was false even when it launched in November 2022. Zach and I were early to point this out: in May 2023 we published Reflection and Perspective: Yes it does that too. You can read our early experiments in that piece. It still stands two and a half years later, and the models have only gotten better in that time.

So here we are in 2025: the myth persists, and there is a whole pedagogical self-help genre dedicated to it. Here is a piece that recommends discussion boards and personal journaling. Here is another that recommends personalization and originality. And yet another says “make it personal” and offers banal prompts like “How does [assignment topic] relate to an experience you’ve had in life?” While the authors of these pieces seem earnest, it is a bit of malpractice to tell instructors that they can AI-proof their assignments by making them personal and self-reflective.

Why does this myth persist? One, it is intuitive: it builds on the idea that these machines do not experience the world the way we do (true, but they can fake it remarkably well), and it did work for a brief moment. Two, it provides comfort in an uncomfortable reality: AI can do just about everything we do in higher ed well enough.

In challenging this myth, we are not suggesting that reflection and personalization are unworthy goals or strategies for student learning. Even if students are entering their bios into an LLM and reading and engaging with the output, perhaps even in a dialogue, they may still gain insights into their lives and their worlds. I’ve been using this approach with some success. As we’ve said elsewhere, students have been cheating forever and will continue to do so in the AI era, so our goal is not zero cheating. Rather, our goal is meaningful and purposeful assignments that minimize the temptation, or better yet integrate AI into critical thinking and reflective learning activities without relying on written papers.

In our next entry in this series, we take on the myth of the “I know it when I see it” approach to evaluating student work. 

Nik Janos

Professor of Sociology at California State University, Chico.

https://nikjanos.org