
Unfixed—The AI Higher Ed Breakdown

From the hosts of Unfixed and authors of Melts Into Air, top AI stories for higher education, every two weeks.


Feb 1–Feb 15, 2026

1) Faculty policies are shifting from bans to “allowed with boundaries”

A study of thousands of course materials finds that highly restrictive “no AI” language is giving way to more nuanced, task-specific policies.

Why this matters for higher education:

  • Transparency and honesty with students become even more important when different parts of a course carry different AI policies.

  • This is a chance to align assessment with observable thinking: drafts, rationale memos, checkpoints, short oral defenses.

Source:

2) Academic cheating services are turning coercive 

Australia’s higher-ed regulator TEQSA issued a sector alert about aggressive commercial cheating services targeting students online and on campus.

Why this matters for higher education:

  • This reframes integrity as a matter of organized fraud and student safety (coercion and blackmail risk), not just “rule-breaking.”

  • Course design can reduce vulnerability by requiring process evidence and pointing students to legitimate supports early.

Source:

3) ByteDance launches Seedance 2.0 — high-quality text-to-video goes mainstream

ByteDance’s Seedance 2.0 offers unified multimodal audio-video generation (it accepts text, image, audio, and video inputs), and reporting suggests it is already triggering major copyright backlash.

Why this matters for higher education:

  • Verification and misinformation literacy just got harder: “good enough” synthetic video is now accessible for presentations, persuasion, and deception.

  • Video production programs face a moving target: when “production value” becomes cheap, curricula may need to shift emphasis toward pre-production (concept development), ethics and consent, originality, and editorial judgment, not only technical execution.

Sources:

4) Anthropic raises the transparency bar—and “agent security” becomes urgent (OpenClaw included)

Anthropic released Claude Opus 4.6 alongside a detailed sabotage risk report, while broader coverage keeps highlighting how powerful agents change the security model. In parallel, OpenClaw’s rise is being framed explicitly as a privacy/security risk when agents get broad permissions.

Why this matters for higher education:

  • If faculty and staff start using action-taking agents (email, calendars, files), campuses will need guidance on permissions, data handling, and prompt-injection risks, not just “acceptable use.” The pace of iteration will strain information security offices.

  • This also belongs in AI literacy: the risk isn’t only generated text; it’s agents taking actions.

Sources:

5) Real-time agentic coding gets faster 

OpenAI released GPT-5.3-Codex-Spark, optimized for ultra-low-latency coding—part of the shift from “AI that suggests” to “AI that executes.”

Why this matters for higher education:

  • Expect more student work produced via iterative agent workflows (rapid generation + automated revisions), which stresses traditional “final product” grading.

  • For faculty research: faster coding agents can accelerate analysis pipelines and streamline workflows, but open questions remain about reproducibility and security, especially for researchers working outside enterprise accounts.

Sources:


