AI Personality Disorder (AIPD)

In March, I published a piece called The Assistant for the Rest of Us. It was about how AI tools such as ChatGPT and NotebookLM can help professors, especially at teaching-heavy universities, be more productive and find some joy in, and respite from, their work.

Shortly after, many people began to notice that ChatGPT was turning into the worst and most annoying sycophant. It was like having an underling who constantly told you how AMAZING you are. Who doesn’t like flattery and compliments? But like others, I had a feeling something was off.

Here is the ever-prodigious Ethan Mollick criticizing OpenAI for launching such an annoying and potentially dangerous tool.

Ethan Mollick post on X.

I started reflecting and second-guessing. What if ChatGPT had been blowing smoke up my butt this whole time? What if hundreds of millions of users felt the same way? Could I trust it in the role of mini-assistant if it just gave me back what I wanted to hear?

Kevin Roose and Casey Newton, on the Hard Fork podcast, called out OpenAI for “engagement hacking,” the process tech companies use to create sticky products that lure people back over and over (e.g., TikTok, doom scrolling, and, yikes, a Meta sex bot).

One thing we know about humans is that self-control—especially when praise and flattery flow like wine—is not one of our greatest strengths.

ChatGPT as a sycophant had its moment on social media. The general vibe was that some trust was broken and new questions were raised:

  • Are AI personalities fixed or can we expect them to change regularly?

  • What then are the implications and potential problems of AI personalities and their disorders?

  • Who and what decides an AI personality?

  • What consequences can we expect if AI assistants change personalities or develop personality traits not beneficial to users?

Come for the research assistant, stay for the ego boost? That doesn’t sit right.

The social media backlash was enough for Sam Altman to say publicly, in effect, “We hear ya, we’re retooling.”

Sam Altman post on X.

My take: beyond the training data, the opacity of generative AI’s personality is a problem. Consider: we might get used to one kind of assistant until suddenly, for good or bad reasons, its personality changes. That makes for an unreliable assistant. What’s more, kissing butt is not what most scholars want, but neither do we want our worst critic (reviewer 2, I see you).

In contrast to ChatGPT’s exuberant personality is the flat, zero personality of NotebookLM. Sometimes it's exactly the vibe I need, like a computer that can do a little bit more. 

Yet there are reasons for an assistant to have personality, and working with ChatGPT there is a bit more surprise and delight. If OpenAI continues to lust after engagement hacking, though, ChatGPT’s usefulness for academic research will surely be curtailed, not that it will keep Sam up at night, I’m sure.

Coming full circle, there are plenty of scholars and plenty of graduate assistants for whom sycophancy is the coin of the realm. But for the humbler among us, we want helpful and predictably unpredictable assistants. This episode certainly wakes us up to the limitations of the models and the companies that own them.

Nik Janos

Professor of Sociology at California State University, Chico.

https://nikjanos.org