Disruption within a disruption

On November 17, the Friday before Thanksgiving, a shock came out of OpenAI, the company behind ChatGPT. The board of directors had fired Sam Altman, CEO and public face of the company. What? Jaws dropped for sure. His firing came just a few weeks shy of the one-year anniversary of the public launch of ChatGPT. The entire Thanksgiving week was a textbook corporate shit show; it may go down as one of the most poorly managed leadership changes in recent corporate history. By the Monday after Thanksgiving, November 27, Sam Altman had been reinstalled as CEO of OpenAI, and the company now has an almost entirely new board, with Microsoft CEO Satya Nadella playing a major role in the resolution. This post is not about the craptastic mismanagement of the company that produces ChatGPT and DALL-E, products that have basically become the Kleenex of generative AI. Rather, I take this whole episode as a warning, as a disruption within a disruption.

I spent way too much time during Thanksgiving break following minute-by-minute changes in the story, reading social media and tech media constantly, and pondering what the seemingly imminent implosion of OpenAI had to do with higher education. Zach and I have written almost a year's worth of material about how generative AI is disrupting higher education, and I think the near implosion of OpenAI serves as a real warning about the possibility of disruptions within disruptions in the future.

I will be using two meanings of the word disruption. The first borrows from Clay Christensen’s theory of disruption, defined as the “phenomenon by which an innovation transforms an existing market or sector by introducing simplicity, convenience, accessibility, and affordability where complication and high cost are the status quo” (Christensen Institute). Zach and I have largely understood generative AI as a Christensen-type disruptive innovation to status quo higher education. The second is the plain old English word disrupt: a disturbance that interrupts an activity or process.

As tools like ChatGPT become mainstays within the university, students, staff, faculty, and administrators, and many functions of the university itself, will become vulnerable to internal disruptions at the companies that make and sell these tools. This is generally true of most capitalist commodities, from cars to food to computers. Remember Covid toilet paper? Disruptions, in both senses of the word, in the production, distribution, and consumption of commodities, in ownership models, and in business models have real-world impacts, often negative, on users. What is different about generative AI and companies like OpenAI is the speed of change in these industries. Few products of any kind have grown as fast as generative AI. In this vein, I want to reflect on what I see as three likely disruptions within the disruption that could impact the use of ChatGPT in the one place most vulnerable to the overall generative AI revolution: higher education.

  1. Unforced or forced implosion or collapse of generative AI companies. This nearly happened in the lead-up to Thanksgiving with OpenAI. Altman was fired, and then nearly 700 of the 780 employees threatened to leave. Although still shrouded in some mystery, the near implosion of OpenAI has roots not in cold, hard capitalist business logic but in a quasi-religious Silicon Valley philosophy called effective altruism. This is a real wildcard. By all accounts, the previous board was in an ethical conflict over the pace of commercialization of ChatGPT, led by Altman.

  2. Standard boom and bust, acquisition, or bankruptcy. This is always a problem for capitalist enterprises, for whole industries, and for the users who depend on them. These are early days, and the business model for generative AI is still being figured out. The old tech problem of growing too fast with free tiers or restricting growth with paid tiers is in play. Running ChatGPT takes an unbelievable amount of expensive server power, costing about $700,000 per day. Mergers and acquisitions are quite likely. Keep an eye on this dynamic; we’ve written about its equity implications elsewhere.

  3. Government regulation to ameliorate political and social effects, regulate competition, and/or protect national interests. It seems entirely plausible that the US government will craft regulation and law to align the interests of the state (and different publics) with regard to the production and use of generative AI. In 1996, Congress and President Clinton passed the Telecommunications Act, which re-regulated the telecommunications industry after 60 years and was the first act to bring the internet into telecommunications law. The internet was the generative AI of the 1990s. It is also reasonable to assume that the US government will see a national security rationale to regulate or gain some control over generative AI, especially in the face of foreign state and non-state actors, such as China, that are building rival technologies, industries, and markets.

Disruptions have winners and losers, and it is not always rival capitalist enterprises that lose. Consumers, users, and the public, the people who have the least control over the development of technology, are often subjected to what feels like never-ending technological change, where we see all that is solid melt into air. Ways of life, and the products that become necessary for them, can change in a matter of years, and with AI in a matter of months. Universities, and here I mean public universities, are funded to serve the public good. But with the generative AI disruption and the possibility of disruptions within the disruption, we need to brace ourselves and plan for continual and rapid change to the meaning of the university and the work that we all do. Unfortunately, faculty, staff, and students have the least control over any of the levers. Meanwhile, administrators and legislators at the top seem uninterested in high-level engagement to shape the present and future of AI technologies and gain some control over their profound impact on higher education.

Nik Janos

Professor of Sociology at California State University, Chico.

greenspacenotes.org