AI agents and the problems of start-up culture for higher ed
AI agents that can autonomously complete tasks with little-to-no human supervision will compel a shift in institutional partnerships away from start-ups like OpenAI and Anthropic and toward established enterprise providers like Google and Microsoft. But to understand why, we need to understand how we got here.
Last year the CSU, the largest university system in the country, entered into a ground-breaking partnership with OpenAI. The agreement gave premium access to nearly 500,000 students, faculty, and staff members and provided some data-security protections for users. This was enormous news for our campus and the entire CSU. Nik captured many of our initial thoughts in a blog post from February, and we also spoke with Leslie Kennedy and Emily Magruder from the CSU Chancellor’s Office on the podcast about how the deal came together.
This relationship and others like it were smart moves that brought the brand recognition of AI start-ups to college campuses. Survey after survey revealed that ChatGPT was already the most popular tool, so it made sense from an institutional perspective to go with the application that offered the least friction. Normally, big companies and educational organizations prefer established enterprise partnerships, but this choice was logical: at the time, the best models and the name recognition belonged to the start-ups.
The bar was also lower because of the kinds of things people used AI for. Building a new syllabus, drafting a study guide, or summarizing documents: these are typically low-risk propositions in terms of data security and access. Users select what data to give these systems and then run a discrete set of tasks. In this way, existing institutional data rules covered almost everything a user might do.
Then came the coding agents. The definition of an AI agent is contested, but here we take it to mean an AI program that can operate autonomously to complete complex tasks. We are specifically interested in agents that write and compile computer code. The launch of Claude Code, then Co-Work, and ChatGPT Codex, along with side projects like the autonomous system OpenClaw, moved us into the agentic era overnight. We are setting aside the implications for teaching and learning (we do have a podcast episode about that) and focusing on the data-security piece for the moment.

With credentials, agents can access university systems and privileged data. This is a huge productivity unlock with implications for every part of what we do. We previously covered some of these hypothetical possibilities for advising, but you can imagine them for every task that involves searching databases for information and then compiling it for forms and analysis. There is also an enormous information-security risk. Depending on how existing user agreements are written, this kind of access may violate them or it may be fine. A local agent may be just as secure as an individual logging in, or it may inadvertently expose secure data across a network. The agents themselves can be unpredictable.
Start-ups are ill-equipped to help institutions navigate this new challenge. Capabilities are sometimes turned on or off by the vendor without notification. Bluntly, the labs do not always know the full capabilities of the models they are producing and introducing into secure environments. Start-ups are also changing the rules in real time. Consider OpenAI’s shift to a for-profit company, or Anthropic’s conflict with the US government, which imperils its entire business operation.
The moment models are interacting with secure data inside our systems, whether student data or cutting-edge research, universities will need enterprise partners with experience and documentation around data security. We will also need help. Most of our data-security rules were written for a world in which only humans could access systems. If an agent on an administrator’s computer accesses data using their credentials, is that a data breach or a secure delegation by the user? Our rule structures don’t account for this, and our technical expertise only goes so far.
Our prediction is that the agentic era will mark a shift away from start-up partnerships and toward established enterprise collaborations. Even if the models are not as good, we have seen time and again that large institutions value security and liability mitigation more than capability. For Chief Information Officers and other decision makers, there are two paths. First, if an institution is already in a partnership with a start-up, the time to press for clarification, predictability, and security is now. It is possible OpenAI and Anthropic will be able to bring their institutional partnerships up to speed, but only if they feel existing and future relationships are at stake. Second, move to a traditional enterprise partner as soon as possible. If your institution does not currently have an industry partner, all the better. Either way, CIOs have to find a path away from the start-up ethos of the AI labs now that the models are not just chat interfaces but systems that handle valuable data and carry the trust of their users.