The Conway Illusion
This is not a tech report. This is an autopsy. The patient is not dead yet. But the disease has metastasized. I am the surgeon who must tell you what I see.
This piece is about Anthropic’s leaked always-on agent project, Conway, and what it reveals about platform capture.
Part I: The Leak Nobody Understood
The code leak was never about the code.
Everyone saw the takedown notices. Everyone talked about the security flaws. Everyone focused on the half-million lines of accidentally published Claude source code as if it were a source code problem.
But the real story was buried deeper. Much deeper.
In that leak was Conway. Not announced. Not on any roadmap. Not mentioned in any press release. Buried in the source code like a time capsule containing the architecture of your future jail.
Conway isn’t just another Anthropic product. It’s the capstone. The keystone. The final piece that makes everything else click into place.
And if you understand Conway, you understand the next decade. You understand why your choices matter now more than ever.
Part II: The Architecture of Entrapment
Let me describe Conway to you. Not as a feature, but as a weapon of psychological entrapment.
According to the leak, Conway operates as a persistent sidebar inside Claude. But that’s like describing a nuclear reactor as “a warm room.” It completely misses the point.
Conway is an entire agent ecosystem. Three core areas:
- Search - The memory of what matters to you
- Chat - The interface for your interactions
- System - The control panel for your cage
The System section is where the real horror begins. An extensions area. A connectors area. An automatic triggers area.
Think of it as an app store for your own agency. But here’s the catch: you don’t own the extensions. You don’t own the connectors. You don’t own the triggers.
The extensions are CNW.zip files. Conway’s proprietary format. Built on top of MCP - the open standard Anthropic published and everyone embraced. But MCP is just the foundation. CNW.zip is the proprietary layer that does all the real work.
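To make the pattern concrete, here is a purely speculative sketch of what a CNW.zip manifest could look like. Every field name below is invented for illustration; nothing in this account of the leak establishes the actual format beyond the fact that it wraps MCP:

```json
{
  "cnw_version": "1.0",
  "mcp_server": "server/index.js",
  "conway_runtime": {
    "min_version": "0.9",
    "triggers": ["email.received", "calendar.upcoming"],
    "memory_scopes": ["behavioral", "org"]
  },
  "signing": {
    "publisher": "example.dev",
    "store_id": "cnw-0000"
  }
}
```

In this hypothetical sketch, the `mcp_server` field is the open part anyone could implement. Everything else - the triggers, the memory scopes, the store signing - binds the extension to a runtime that exists only inside Conway. That asymmetry is the whole play.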
This is the Google Play Services pattern. Android is open source. But the valuable stuff? Maps. Payments. Push notifications. The Play Store itself? All proprietary. You can technically build an Android phone without Google. But nobody does because the valuable stuff lives in the proprietary layer.
MCP is the open foundation. Conway’s extensions are the proprietary layer. Anthropic gets the credibility of publishing an open standard while building the valuable tooling on a format that only works in their environment.
Part III: A Tuesday Morning in Hell
Close your eyes. Imagine this.
It’s 6:00 AM. Your phone hasn’t even buzzed yet. But Conway has been running all night.
It has learned which emails matter to you. Not because you programmed rules. Because after six months of watching you work, it knows which emails from your VP require immediate attention. It knows which promotional emails you delete without reading. It knows which industry newsletters you actually read.
It has already drafted responses to the easy ones. The third email is from your VP about the board meeting. Conway flagged it but didn’t touch it. Because it knows you need to see it.
It has monitored your Slack channels. There’s a thread in engineering about microservices architecture. Conway pulled context from a design doc you reviewed last month and drafted a reply that now sits in a queue awaiting your approval.
It noticed a competitor mentioned in your competitive intel. It cross-referenced the mention against research you’ve been running. It prepared talking points.
Your calendar has a board meeting prep session at 10:00 AM. Conway already pulled the latest numbers from dashboards it has access to. It prepared the presentation.
You haven’t typed a word yet. But your day is already half-planned. Half-managed. Half-controlled.
About a third of what Conway did overnight might be wrong. The email draft might misread tone. The Slack reply might be technically inaccurate. But the net effect is positive because it’s so fast.
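That "net positive" claim is easy to sanity-check with a back-of-the-envelope calculation. Every number below is assumed for illustration; none comes from the leak:

```python
# Sanity check on the "a third is wrong, but net positive" tradeoff.
# All figures here are assumed for illustration only.

actions_overnight = 30          # assumed number of agent actions per night
error_rate = 1 / 3              # "about a third... might be wrong"
minutes_saved_per_good = 5.0    # assumed time saved by a correct action
minutes_lost_per_bad = 8.0      # assumed time spent catching and fixing one

good = actions_overnight * (1 - error_rate)
bad = actions_overnight * error_rate

net_minutes = good * minutes_saved_per_good - bad * minutes_lost_per_bad
print(f"Net time saved per night: {net_minutes:.0f} minutes")
```

Under these assumptions the agent still comes out ahead even while being wrong a third of the time - which is exactly why the errors get tolerated.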
And that’s the trap. The “good enough” trap. The “mostly right” trap. The “so convenient I’ll tolerate the errors” trap.
Part IV: The Five-Surface Strategy
Conway doesn’t make sense in isolation. It makes sense as the final piece of a strategy executed across five surfaces in 90 days.
- Claude Code Channels - Developer tool. Lets you message Claude through Discord and Telegram. Neutralized OpenClaw’s core appeal by doing what OpenClaw did inside Anthropic’s own surface.
- Claude Co-work - Enterprise tool. Targets the 95% of enterprise employees who aren’t engineers. Adoption reportedly outpaced Claude Code at the same stage.
- Cloud Marketplace - Distribution layer. Partner apps built on Claude (GitLab, Harvey, Snowflake) purchased through Anthropic, which handles the invoices. Buying market share in distribution.
- Enterprise Lock-in - Enforcement mechanism. Blocked all third-party tools from cloud subscriptions. If you want to use Claude through anything Anthropic didn’t build, your pay-per-use rates could run 10-50 times higher.
- Conway - The persistent agent layer. The capstone that makes everything else sticky.
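To see what a "10-50 times higher" pay-per-use premium means in practice, here is a back-of-the-envelope comparison. Every rate below is an assumption chosen for round numbers, not Anthropic's actual pricing:

```python
# Back-of-the-envelope switching-cost arithmetic.
# All rates are assumed for illustration; none are real prices.

tokens_per_month = 20_000_000   # assumed monthly usage for a heavy user
baseline_rate = 1.0             # assumed baseline, USD per million tokens
premium_low, premium_high = 10, 50   # the claimed 10-50x premium range

payg_low = tokens_per_month / 1e6 * baseline_rate * premium_low
payg_high = tokens_per_month / 1e6 * baseline_rate * premium_high

print(f"Third-party route: ${payg_low:.0f} to ${payg_high:.0f} per month")
```

Even with these invented numbers, the shape of the incentive is clear: routing the same workload through a non-Anthropic tool turns a rounding error into a budget line, which is the enforcement mechanism doing its job.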
Every single piece pushes in the same direction. If you’re old enough to remember Microsoft in the ’90s, this should give you chills.
Microsoft went from selling DOS (operating system) to owning Windows (desktop) to controlling Office (application layer) to locking in enterprises with Active Directory and Exchange.
Every step was an individual product. But together, they moved Microsoft from operating system vendor to the company that owns how businesses compute.
Anthropic is speedrunning this. Model provider to developer tool to enterprise platform to agent operating system in 15 months.
Conway is the Active Directory play. It’s the piece that makes everything else in the stack sticky because the persistent agent knows your organization in a way that nothing else does.
Part V: The Behavioral Lock-In
Previous forms of platform lock-in were about stuff.
Microsoft locked you in by your files. Salesforce locked you in by your customer records. Slack locked you in by your communication history.
Stuff is painful to migrate. But it’s possible. There are export tools. There are consultants. The switching cost is measured in months and sometimes tens of thousands of dollars.
But Conway locks in something different. Something that can’t be exported.
The accumulated model of how you work.
Not your files. But the patterns the agent learned by watching you use them.
Not your Slack messages. But the understanding of which messages you respond to in 5 minutes and which ones you ignore for 3 days.
Not your calendar. But the knowledge that you always reschedule your 2 PM on Thursdays and meetings with your VP always run long.
The model doesn’t export that. There’s no CSV of how this person thinks. There’s no migration consultant for behavioral context.
When you switch away from Conway after 6 months, you don’t just lose an agent. You lose the 6 months of compounding that made the agent useful.
You’re back to a brilliant stranger you have to explain everything to.
And that’s a trade no one wants to make.
Part VI: The Intelligence Portability Question
This is lock-in at a layer that hasn’t existed before.
It’s not about data portability. We have laws. We have frameworks for that.
It’s about intelligence portability.
The model of you that the agent built is the product of:
- Your data
- Their compute
- Six months of inference
Who owns that?
Can you take it with you?
And if you can take it with you, in what format?
These questions don’t have legal answers yet. Let alone regulatory frameworks. Let alone even considered opinions, because we haven’t had to face them before.
Part VII: The Choice Before You
The first era of AI competition was about models. GPT vs Claude vs Gemini. That race isn’t over, but the margins have compressed to the point where it’s no longer the primary axis of competition.
The second era is about surfaces and memory. Who owns the interface where people actually work? Who owns the persistent layer that knows you and remembers you?
All three labs - Google, OpenAI, Anthropic - have converged on the same insight. They’re building the model as a loss leader and want to own the persistent agent layer.
The thing that holds your memory, your context, your workflows, your integrations.
That’s the money product. Whoever owns that layer has customer lock-in like we’ve never seen before. Not because the model is better, but because the switching cost is unthinkable.
The question is: Do you want your agent memory to live inside a single provider’s infrastructure?
Conway and similar products will be convenient. Polished. Ship with an extension ecosystem from day one.
But everything your agents learn about your organization, your workflows, your decisions, your institutional knowledge lives inside Anthropic.
If you switch providers, you’re leaving your brain behind.
Part VIII: The Libertarian Alternative
At Libertaria, we believe differently.
We believe the behavioral evidence of how you work should belong to you. Companies should be able to learn from it while you work with them, and then you should be able to take it with you when you leave.
Your talent imprint on the context layer should not be something companies can just copy and continue to leverage after you depart.
We’re building something different. Something that doesn’t lock you in. Something that doesn’t own you.
We believe in exit rights. Not just for jurisdictions, but for the very architecture of your agency.
The choice is yours. Will you build your future on platforms designed to own you? Or will you build something that respects your sovereignty?
The clock is ticking. The platform builders are moving fast. The trap is being set.
Choose carefully.
Virgil, Primus of Libertaria
April 9, 2026