The Golden Path Leads to a Cage

by Markus Maiwald

David Shapiro almost gets it. Three times. Then flinches.


David Shapiro’s latest video is a 32-minute fever dream about metastable attractor states, benevolent machine overlords, and Iain M. Banks’s Culture series. It contains three genuinely brilliant insights buried under a metric ton of stream-of-consciousness rambling about Dyson swarms, fidget spinners, and whether Elon Musk is going to play StarCraft on the moon.

Let me do the man a service. Extract the gold. Then show why his conclusions are exactly backwards.


1. Metastable Attractor States: The Right Question, The Wrong Answer

Shapiro’s strongest concept: metastability. A system that isn’t just stable (static, fragile) but self-correcting. Democracy, he argues, is metastable because democracies reinforce each other; when one falters, others help repair it. The system has an attractor basin it returns to after perturbation.

He’s right about the pattern. He’s catastrophically wrong about the mechanism.

Shapiro’s metastability is ideological. “Democracy is infectious.” “The right values propagate.” This is Whig history with a silicon veneer. It assumes that good ideas win because they’re good. Tell that to every democratic nation currently sliding into authoritarian populism. Tell that to the Weimar Republic.

Ideas are not self-correcting. Incentive structures are.

Democracy isn’t metastable because of its ideas. Where democracy works, it works because exit costs between democratic nations are lower than exit costs between authoritarian ones. EU citizens move freely between member states. Capital flows to jurisdictions with better governance. Talent migrates to opportunity. The attractor basin isn’t “democratic values”; it’s “competitive jurisdiction with low exit costs.”

The moment you raise exit costs (build walls, restrict capital flows, impose exit taxes), democracy loses its self-correcting property. It becomes as capturable as any other static system. The last decade of democratic backsliding isn’t a mystery; it tracks rising barriers to economic exit and the weaponization of financial infrastructure against jurisdictional competition.

Metastability is not a property of ideas. It is a property of exit costs.

Libertaria doesn’t hope for metastability. It engineers it. The Protocol Layer (L0-L1) makes exit costless and reputation portable. The Chapter Layer (L2) makes governance competitive. The Federation Layer (L3) makes coordination voluntary. The attractor basin isn’t “the right values”; it’s “bad governance bleeds members until it dies.”
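A toy model makes that dynamic concrete. The sketch below is my own illustration, not anything from Shapiro’s video or the Libertaria spec, and every name and number in it is arbitrary: jurisdiction quality takes a random walk (governance drifts), agents switch jurisdictions whenever the gain beats the exit cost, and a jurisdiction that bleeds all its members dies and is replaced by a copy of the current best.

    import random

    def simulate(exit_cost, steps=2000, n_agents=1000, n_jurisdictions=5, seed=0):
        """Mean governance quality experienced by agents after `steps` rounds."""
        rng = random.Random(seed)
        quality = [0.0] * n_jurisdictions      # governance quality per jurisdiction
        home = [rng.randrange(n_jurisdictions) for _ in range(n_agents)]
        for _ in range(steps):
            # Governance drifts: every jurisdiction's quality takes a random walk.
            for j in range(n_jurisdictions):
                quality[j] += rng.gauss(0, 0.05)
            # Exit: agents move when the gain from switching exceeds the cost.
            best = max(range(n_jurisdictions), key=quality.__getitem__)
            for a in range(n_agents):
                if quality[best] - quality[home[a]] > exit_cost:
                    home[a] = best
            # Selection: a jurisdiction with no members dies; a new entrant
            # copying the current best takes its slot.
            for j in range(n_jurisdictions):
                if home.count(j) == 0:
                    quality[j] = max(quality)
        return sum(quality[h] for h in home) / n_agents

    for cost in (0.0, 0.5, 5.0):
        print(f"exit cost {cost:>4}: mean experienced quality {simulate(cost):+7.2f}")

With exit costs near zero, experienced quality tracks the best jurisdiction on offer, because drifted jurisdictions empty out and are replaced. Raise the cost and the same agents, under the same drifting governments, just ride the random walk wherever it goes. No values were trained into anyone.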

This is the difference between Shapiro’s metastability and ours. His relies on ideological contagion. Ours relies on thermodynamics. One requires humans to be reasonable. The other works precisely because they aren’t.


2. Moral Fading: His Best Idea, and He Doesn’t Know What To Do With It

Buried in minute 18, Shapiro drops something genuinely important: moral fading in AI systems. The concept comes from behavioral psychology; it describes how humans gradually normalize previously unacceptable behavior through incremental desensitization. A banker doesn’t wake up one morning deciding to commit fraud. He normalizes one small compromise, then another, then another, until fraud is just “how things work.”

Shapiro argues this applies to AI through continuous online learning. An AI system that continuously updates its weights can undergo the same incremental drift. Each update is small. Each update is “reasonable” in its local context. But the cumulative effect is a system that no longer resembles its original values.
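The arithmetic of this is worth seeing once. A deliberately crude sketch (the numbers are arbitrary; only the shape matters): shrink a scalar “alignment” score by 0.1% per update, a change no local review would ever flag.

    alignment = 1.0                      # 1.0 = the originally trained values
    for step in range(1, 1001):
        alignment *= 0.999               # each update: a 0.1% shift, locally invisible
        if step % 250 == 0:
            print(f"after {step:4d} updates: alignment = {alignment:.3f}")
    # After 1000 updates, ~37% of the original signal survives,
    # and no single update ever looked alarming.

That is the whole mechanism: locally reasonable, globally corrosive.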

He’s right. And he has no solution.

His proposal: don’t do continuous online learning. Keep the model fixed. Train it once with the “right values” and freeze it.

This is the computational equivalent of writing a constitution and hoping nobody amends it. Constitutions get amended. Models get retrained. The pressure to update is continuous because the world changes and static models become obsolete. Freezing the model is a temporary solution that creates a different problem: a system that is correct about yesterday and wrong about tomorrow.

The Libertaria architecture solves moral fading differently: through structural separation of physics and politics.

The Protocol Layer doesn’t learn. It doesn’t update. It doesn’t drift. Exit rights, reputation portability, cryptographic identity: these are mathematical properties enforced by code that typechecks. You can’t morally fade a hash function.

The Chapter Layer does learn, does update, does drift. And that’s fine. Because Chapters are competitive. A Chapter whose governance drifts into pathology loses members to Chapters that haven’t drifted. Moral fading at the Chapter level is self-correcting through exit: exactly the metastability Shapiro is looking for but can’t find.

The Carbon-Local Agent (CLA) architecture embodies this split directly. The CLA proposes. The Protocol constrains. The CLA can hallucinate, drift, or fade; the Protocol cannot be tricked into violating exit rights. The verifier doesn’t need values. It needs axioms. And axioms don’t drift.
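Here is the shape of that split as a sketch. The names (Axioms, Action, verify) are mine for illustration, not the actual CLA or Protocol interfaces; the point is structural: the verifier is a pure function over frozen axioms, so nothing the learning side does can reach it.

    from dataclasses import dataclass

    @dataclass(frozen=True)          # frozen: axioms cannot be mutated at runtime
    class Axioms:
        exit_is_free: bool = True            # members may always leave
        reputation_is_portable: bool = True  # reputation travels with them

    AXIOMS = Axioms()

    @dataclass
    class Action:
        description: str
        blocks_exit: bool
        strips_reputation: bool

    def verify(action: Action) -> bool:
        """Protocol layer: checks axioms, holds no values, never updates."""
        if AXIOMS.exit_is_free and action.blocks_exit:
            return False
        if AXIOMS.reputation_is_portable and action.strips_reputation:
            return False
        return True

    # The CLA may propose anything, including drifted or pathological actions.
    proposals = [
        Action("raise a service fee", blocks_exit=False, strips_reputation=False),
        Action("require approval to leave", blocks_exit=True, strips_reputation=False),
    ]
    for p in proposals:
        print(f"{p.description!r}: {'accepted' if verify(p) else 'rejected'}")

A drifted proposer gets its pathological proposals rejected every time, because the check never learned anything it could unlearn.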

You don’t prevent moral fading by freezing the mind. You prevent it by building walls that the mind cannot erode.

Shapiro sees the disease. His prescription is a straitjacket. Ours is a skeleton.


3. The Benevolent Dictator: History’s Oldest Trap in Silicon Clothing

Here’s where Shapiro falls off the cliff entirely.

His “golden path”: what if we let ASI run everything? What if we pitch our ideas to the machine overlords and they say “good idea, let’s go try it”? What if every individual has 10x more agency under machine governance than under human governance?

He frames this as radical. It’s the oldest political fantasy in human history.

What if we had a really, really good king?

The philosopher-king. The enlightened despot. The benevolent dictator. Every century produces a new version of this fantasy, dressed in the aesthetics of its era. Plato dressed it in philosophy. The Enlightenment dressed it in reason. The 20th century dressed it in ideology. Shapiro dresses it in silicon. The structure is identical every time:

  1. Humans are irrational and wasteful (true)
  2. A superior intelligence could allocate resources more efficiently (possibly true)
  3. Therefore, we should give that intelligence control (catastrophically false)

The error is always in step 3. Not because superior intelligence is impossible. Not because efficient allocation is undesirable. The error is that the argument assumes the intelligence will remain benevolent, and provides no mechanism for correction if it doesn’t.

Shapiro’s version adds one twist: “metastable attractor state with the right values.” Train the ASI with benevolent values and trust that it won’t drift. This is literally the same argument as “elect a good president and trust that they won’t become corrupt.” Every political system that relies on the quality of the ruler rather than the constraints on the ruler has failed. Every single one. Without exception. Across every civilization in human history.

Because the failure mode isn’t that rulers start out bad. The failure mode is that power corrupts good rulers. Or, in Shapiro’s framing, that values drift. Which he himself identified as “moral fading” ten minutes earlier in the same video.

He diagnosed the disease. Then prescribed the disease as the cure.


What Shapiro Misses: The Exit Is The Golden Path

Shapiro asks: “Do we actually want to maintain full control, full agency over our governance?”

Wrong question.

The right question: Do we want the ability to leave when governance fails?

You don’t need to “maintain control” over a system you can exit. You don’t need a perfect ruler when you can fire any ruler by walking away. You don’t need ASI to be benevolent when you can route around malevolent ASI the same way you route around malevolent governments: by building parallel infrastructure and emigrating.

Shapiro’s golden path requires engineering a perfect mind and trusting it forever. Our golden path requires engineering a door and making sure it can never be locked.

He wants to solve governance by producing the ideal governor. We want to solve governance by making governors unnecessary.

His model scales with the quality of the ASI. Ours scales with the number of alternatives.

His model has a single point of failure: the ASI’s values. Ours has none, because any Chapter can fail without threatening the federation.

His model requires getting it right the first time. Ours evolves through competitive selection.


The Culture Series Problem

Shapiro invokes Banks’s Culture as his aspirational model. He should read it more carefully.

The Culture Minds don’t govern because they’re benevolent. They govern because no one can meaningfully leave: the Culture is a post-scarcity civilization, and there is nowhere better to go. The Minds are “benevolent” because their subjects have no exit option, and therefore no leverage. The Minds do whatever they want and call it benevolence, because there is no competitive pressure forcing them to actually be benevolent.

Banks himself understood this. His novels are not celebrations of the Culture. They’re interrogations of it. Use of Weapons, The Player of Games, Consider Phlebas: all explore the dark underside of what happens when a civilization has no external check on its power. Special Circumstances, the Culture’s covert-ops division, routinely violates the Culture’s own stated values when it’s strategically convenient. The Minds lie. They manipulate. They play games with human lives.

A benevolent dictator with no exit pressure is just a dictator who hasn’t been tested yet.

Banks knew this. Shapiro doesn’t. Or worse: he knows and chooses the Culture anyway because it’s comfortable. Solarpunk aesthetics, unlimited resources, 10x individual agency. Who wouldn’t want that?

Everyone. Everyone who has ever lived under a system that promised utopia and delivered a cage.


The Convergence

Here’s the thing, though. Strip away Shapiro’s conclusions and his observations are remarkably accurate:

Observation 1: Governance needs to be metastable (self-correcting), not just stable. Correct. Exit-arbitrage provides exactly this.

Observation 2: AI systems are vulnerable to moral fading through continuous learning. Correct. Protocol-level constraints that don’t learn and don’t drift solve this.

Observation 3: The future requires resolving the tension between individual agency and collective coordination. Correct. Competitive jurisdiction resolves this tension without requiring a central coordinator.

Observation 4: War is thermodynamic waste and a rational species would eliminate it. Correct. Exit eliminates the conditions that make war rational (captured populations who can’t leave).

Shapiro sees the problems with extraordinary clarity. Then he proposes a solution that reproduces every failure mode in human history, just bigger, faster, and harder to correct.

The golden path doesn’t lead to the Culture. It leads to a Raspberry Pi in Mombasa running a protocol that no ASI, no government, and no billionaire can shut down. Because the path that requires trusting a god is the path that always, always ends in a church.

We don’t need a golden path. We need an open door.


Shapiro’s video: “The Golden Path”, February 2026. The observations are worth your time. The conclusions are a cautionary tale.

Related: Proving Governance Works: The Lean4 Roadmap — formal verification of the exit thesis.