AI Central

Becoming Ungovernable

Western authorities are failing across the board to fill the regulatory vacuum in the AI space.

Jordamøn
Mar 26, 2026

The three major Western jurisdictions have each spent the past year working to bring AI under regulatory control. Their starting positions had almost nothing in common: the US had no federal AI law, the EU had the most comprehensive AI regulation in the world, and the UK was midway through a consultation on how copyright should apply to AI training.

Each government has now published or advanced its next policy step, and every one was a retreat. The US deferred, the EU revised, and the UK abandoned. The shared direction reveals more about where Western AI governance stands than any of the three documents does individually.

A framework without a foundation

The White House released its national AI legislative framework last Friday, delivering on a directive from the president’s December executive order that called for a federal AI standard to override state-level regulation. The four-page document organizes its recommendations around six pillars: child safety, community protection, intellectual property, free speech, innovation, and workforce development. The structural argument is that a single federal standard must replace the state-level laws that four states have already enacted, because regulatory fragmentation would undermine American competitiveness against China.

Dozens of copyright lawsuits between AI developers and rights holders are pending in federal court. The framework addresses this conflict by stating the administration's view that training AI models on copyrighted material does not violate copyright law. In the same passage, it acknowledges that arguments to the contrary exist and defers the question to the judiciary. It also suggests that Congress consider enabling collective licensing frameworks so that rights holders can negotiate compensation from AI providers, but specifies that Congress should not address when or whether such licensing is required.

The preemption provision, the framework’s centerpiece, calls on Congress to override state AI laws that impose undue burdens while preserving state authority over children’s safety, data center siting, and the procurement of AI tools. The boundary between federal and state jurisdiction remains vague enough that the document functions more as a political signal than as a legislative blueprint. Congress has already declined to pass AI preemption twice in the current session, removing it first from the budget reconciliation bill and then from the annual defense authorization.

The AI industry welcomed the framework. Trade groups whose members include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI praised it as the right approach to maintaining American competitiveness. Watchdog organizations, including Americans for Responsible Innovation, argued that the framework shields AI developers from liability and contains no enforcement mechanism for the protections it describes. The split tracks the framework’s design, which addresses every major area of concern while committing to action in none of them.

Cold feet

The EU AI Act became law in 2024 as the most comprehensive attempt by any jurisdiction to regulate artificial intelligence across risk categories. Before its high-risk provisions have fully taken effect, the EU institutions are already revising it. The Act’s implementation has shown strain at the level of individual provisions, most visibly in the transparency code for AI-generated content, which could not define its own central terms five months before enforcement begins. The Omnibus VII package, one of ten simplification proposals that the Commission has introduced since February 2025, extends that pattern to the Act’s structural level by delaying deadlines, broadening exemptions, and reducing compliance requirements.

The Council adopted its negotiating position on March 13 under the Cypriot presidency, which treated the proposal with declared urgency. The position extends deadlines for high-risk AI system compliance by up to sixteen months, so that the rules take effect only after the Commission confirms that the necessary technical standards exist. The package also broadens regulatory exemptions to include small mid-cap companies that have outgrown SME status, expands the ability to process sensitive personal data for bias detection, and postpones the deadline for national AI regulatory sandboxes to December 2027.

The political impetus traces to the Draghi and Letta reports on European competitiveness, which argued that regulatory burden was eroding the EU’s position in the global technology race. The Budapest declaration of November 2024 called for a “simplification revolution.” The Council reached its negotiating position in weeks, a pace that the original AI Act never approached, and the speed itself signals a political consensus that has shifted from regulation-as-leadership to regulation-as-burden.

The Council did add one new prohibition to the AI Act: a ban on AI systems that generate non-consensual intimate imagery of identifiable real people. The Parliament’s committees endorsed the same measure. The EU’s retreat is selective. Compliance timelines and administrative burdens are negotiable, but categories of clear harm still trigger new restrictions.

Back to the drawing board

The UK Intellectual Property Office published its mandated copyright report last week, the product of a consultation that ran from December 2024 to February 2025 and drew over 11,500 responses. The report fulfilled an obligation under the Data (Use and Access) Act 2025, which required the government to assess the economic impact of several policy options for how copyright law should apply to AI training. The government had favored a broad text-and-data-mining exception that would have permitted AI companies to train on copyrighted material unless rights holders explicitly opted out. That approach is no longer under consideration.

Of those who responded through the government’s online survey, 88% favored strengthening copyright protections. Only 3% supported the government’s preferred option. The creative industries rejected the opt-out model as an erosion of their rights. The AI companies rejected it as too restrictive, on the grounds that an opt-out mechanism would impede access to training data. A proposal that both sides of the dispute found unacceptable left the government with nowhere to stand.

© 2026 Infogalactic AG