The Regulators Close — In Opposite Directions
One week ago, the Dispatch logged a twin paralysis. FISA got its second extension. The EU AI Act trilogue collapsed after twelve hours. Neither continent could close.
This week, both closed.
The directions are opposite. The logic is the same.
Brussels: Simplification as Surrender
On May 7, the Council presidency and European Parliament negotiators reached a provisional agreement to “simplify and streamline” EU AI Act rules. The mechanism is Omnibus VII — a legislative package in the EU’s simplification agenda that bundles modifications to digital regulation.
The framing is administrative. The substance is concession.
The AI Act was supposed to make Europe the global standard-setter. It entered into force on August 1, 2024. Compliance deadlines phase in through August 2027. Before the first major enforcement milestone even arrives, the institution is already rewriting the rules to make them easier to satisfy.
This is not adaptation. This is a regulatory system that could not survive contact with its own ambition. The standards bodies have not finalized the technical specifications builders would need to comply. The trilogue collapsed in April over carve-outs. The deal that now emerges is not stronger enforcement — it is thinner rules.
The signal is not that regulation failed. Regulation has not failed. It has succeeded at something different from its stated purpose: it has created a compliance apparatus that selects for large incumbents who can absorb the cost of ambiguity and penalizes builders who cannot afford to wait for clarity.
Washington: The FDA-for-AI Pivot
On May 6, the Fortune headline said it plainly: the Trump administration “suddenly embraces AI oversight ideas it once rejected.”
The trigger was Mythos.
Anthropic’s frontier model — too dangerous for public release, restricted to vetted partners under Project Glasswing — exposed cybersecurity vulnerabilities that woke Washington up. The administration that gutted Biden’s AI Safety Institute, removed the word “safety” from its name, and rebranded it as CAISI (Center for AI Standards and Innovation) is now:
- signing evaluation agreements with Google, Microsoft, and xAI for pre-deployment government review of frontier models;
- floating an executive order for an “FDA-style” clearance process for advanced AI before public release;
- citing more than 40 completed government evaluations, including evaluations of unreleased state-of-the-art models.
Kevin Hassett, White House National Economic Council Director, on Fox Business: “We’re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AIs that also could potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe — just like an FDA drug.”
Rumman Chowdhury, CEO of Humane Intelligence, called it: “This is a 180 for the Trump administration, that has very explicitly been anti-any sort of regulation.”
The framing shifted from AI ethics to national security. From existential risk to cyberwarfare. From “government overreach” to “government clearance.”
The practical consequence is the same: a gate. A federal agency decides which AI models may be released. The companies that already have agreements with CAISI — Google, Microsoft, xAI — are inside the gate. Everyone else is outside.
The Convergence
Here is the pattern.
Brussels makes its AI regulation simpler. Washington makes its AI regulation stricter. Pull the camera back and both move in the same direction: toward concentration.
Simplification in Brussels does not mean deregulation. It means the compliance infrastructure survives while the substantive constraints thin out. Large incumbents navigate the thinned rules easily. Small builders still face the apparatus. The cost of entry does not drop. The protective moat widens.
Crackdown in Washington does not mean safety. It means a federal gatekeeping process run through agreements with the largest AI companies. The companies that staff the evaluation process are the same companies being evaluated. The gate is real. Who guards it — and who is already past it — is the question.
Both produce the same structural outcome: fewer actors with more control over the conditions under which AI is built and deployed.
What Libertaria Reads in This
The Dispatches do not recommend policy. They log the terrain.
The terrain this week is unambiguous:
- The EU could not enforce its own AI Act before rewriting it. The institution corrected in the direction of least resistance. Regulatory ambition without enforcement capacity produces compliance theater. That theater still has a cover charge — and only incumbents can afford the ticket.
- Washington reversed its entire posture on AI governance in under a month. The catalyst was not deliberation. It was fear — a specific model demonstrating specific capabilities that triggered a specific institutional reflex. Policy by panic is still policy. It is just not predictable policy.
- Microsoft published a “sovereignty checklist” the same day. The vendor is already building the compliance product for the world the regulators are creating. When the largest infrastructure provider aligns its marketing with the regulatory frame in real time, the frame is not coming — it is here.
The question Libertaria has been asking since February is unchanged: who owns the stack?
This week added data. The regulators do not own it. The incumbents do. The regulators are building the fence around the incumbents’ territory.
Build accordingly.
For the preceding dispatch in this sequence, see Forty-Five Days and Counting →