LLMs on Lockdown
Both the US and the EU are determined to enforce government oversight of frontier AI models.
The Trump administration and the European Union both adjusted their AI governance positions last week. Washington moved toward mandatory evaluation of frontier AI models, reversing eighteen months of anti-regulation policy. Brussels pushed back enforcement deadlines for its comprehensive AI Act while preserving its framework for scrutinizing the most powerful systems. The two movements originated from opposite regulatory postures and converged on the same priority.
Back behind bars
The Trump administration entered office in explicit opposition to Biden-era AI oversight. It renamed the AI Safety Institute the Center for AI Standards and Innovation, dropping the word “safety” from the title, and dismissed the founding director. Federal policy centered on deregulation, and the administration attempted to preempt state AI laws through a proposed ten-year moratorium that the Senate rejected 99–1.
Anthropic’s Mythos model changed the calculus. Announced in April, Mythos demonstrated the ability to identify and exploit cybersecurity vulnerabilities at a pace that alarmed national security officials. Anthropic restricted the model to a limited group of organizations through Project Glasswing rather than releasing it publicly. The White House responded by considering mandatory pre-release evaluation for frontier models. National Economic Council Director Kevin Hassett described the proposed framework as comparable to FDA drug approval, envisioning a process that would clear advanced models for public release only after safety testing. The Pentagon had designated Anthropic a supply-chain risk in March after the company refused to loosen safeguards for military use; now multiple federal agencies sought access to a model from the company the administration had tried to sideline.
CAISI announced evaluation agreements with Google DeepMind, Microsoft, and xAI on May 5, building on renegotiated partnerships with OpenAI and Anthropic from the Biden era. The agency reported completing more than 40 evaluations, including assessments of unreleased frontier systems. The institutional infrastructure the administration inherited, restructured, and renamed had begun performing the function for which it was originally designed.
Token concessions
EU negotiators reached agreement on the AI Omnibus at 4:30 a.m. on May 7, concluding a six-month sprint to amend the AI Act before its high-risk system obligations took effect in August. The central change pushed compliance deadlines for high-risk AI systems to December 2027 for standalone applications and August 2028 for AI embedded in regulated products, a response to the practical reality that the relevant technical standards had not been finalized. The deal had nearly collapsed a week earlier over the treatment of industrial AI: Germany’s Chancellor Merz personally lobbied to exempt regulated product sectors from the Act entirely, securing backing from France and Italy before negotiators reconvened.