What Does It Mean to Fight AI?

In our first ever blog post we mapped out three likely responses to what was then the upcoming generative AI disruption of higher education. We said that some at the university would ignore it, others would fight it, and still others would embrace it. 

Since ChatGPT launched, and since that first post, we have been arguing for a critically minded embrace of and adaptation to generative AI. But we regularly talk to people whose disposition is to fight these technologies. In this post, we want to explore the logic behind the fight position: what motivates this disposition, and what actual organizational pathways people who hold it might take. As AI technologies continue to become part of university life, opposition to them will shape not only the conversation and vibe around them but likely their implementation and use. 

Generative AI was released at the exact moment that many people's adoration of computer technology, such as smartphones and social media, had crested and Big Tech had a political bullseye on it. Facebook, Google, Apple, Amazon, and Microsoft grew into juggernauts over the past two decades, and by 2022 all were facing public skepticism, if not outright hostility. Moreover, governments in the US, Europe, and elsewhere were bringing antitrust actions and introducing new regulatory measures against most of these companies. Into this environment came generative AI, unleashing a new disruption to society. It is understandable that tech skepticism might morph into tech nihilism. 

Universities, as institutions, and the people who work at them are devoted to human progress in thinking, writing, reading, and behavior. Generative AI is demonstrating that universities, and faculty in particular, do not have a monopoly on these skills, goals, and values, so it is no surprise that anti-AI sentiment would grow. We have also argued that part of this resistance is rooted in concerns that AI threatens our jobs and devalues the meaning behind the work that we do. 

When we listen to and speak with faculty and staff, we hear a split over authenticity versus inauthenticity. Those who dislike generative AI feel a revulsion toward its perceived inauthenticity. For example, a student who painstakingly writes a term paper by doing the hard work of reading, thinking, and writing is said to produce something authentically human. The same goes for the professor writing a handcrafted review of a colleague's Retention, Tenure, and Promotion progress. In both cases, authenticity is in the struggle; it is in the work. 

There are additional reasons why people are in the fight camp, including legitimate concerns about sustainability, privacy, and the power these corporations wield. Journalist Karen Hao's new book Empire of AI looks to be a genre-defining contribution. What we are really interested in is this: what does fighting look like right now? Cynically, it is people complaining in meetings and on social media. More substantively, we have seen some interesting tactics and want to speculate about some others. 

First, professors are mobilizing their students. Students are often the most powerful activists on campus, especially on campuses with low enrollment. Professors are infusing curricula with critiques of AI, or at least pointing out some of the issues noted above to their students, who then voice their concerns to administration.

Second, fighting can be woven into existing frameworks of advocacy. We are part of the California Faculty Association, which represents instructional faculty in the CSU. The CFA has been extremely vocal and critical about the relationship between the CSU and OpenAI, including an allegation of breach of contract and powerful rhetoric about using AI to replace faculty. The CFA wields power, not just influence, and we may see AI addressed in the next collective bargaining agreement, in a contest that resembles the recent writers' strike and agreement in Hollywood. 

Third, and this is more speculative and not specific to universities, we may see industrial sabotage as part of the fight disposition in the future. There is a long history of this, from reactions against the mechanical loom for fear of labor disruption to sabotage of logging and mining operations that threatened the natural world. Recently, we even saw the assassination of a corporate CEO as part of a personal and political agenda. We certainly are not advocating violence, but we want to point it out as a possibility. 

When we first wrote about the “fight” camp in relation to AI, we were mainly thinking about what goes on in the classroom. As we have seen institutional adoption and political evolution, we have to acknowledge that the scope of the fight is much bigger than a syllabus policy: these advocacy positions are solidifying across institutions and societies as we come to grips with how this technology works and our relationships with it. 

Nik Janos

Professor of Sociology at California State University, Chico.

https://nikjanos.org