Sunday, 10 May 2026

From git clone to llm clone

No Software Is Safe

When Linus Torvalds shipped the first version of Git in 2005, he solved the coordination problem of distributed software development. git clone became the foundational primitive of modern open source — a single command that collapsed the distance between "knowing software exists" and "having the software." Before Git, replication required permission, proximity, and manual effort. After Git, replication became free.

We are at an equivalent inflection point. The primitive is different and the implications are more intense. The new command is not git clone <repo>. It does not require the source code. It does not require a repository. It does not require permission from anyone. It requires only a public interface, a test suite, and a frontier model with a feedback loop.


# The old world
$ git clone https://github.com/vercel/next.js
# Requires: public source · open licence · maintainer permission

# The new world
$ llm clone https://nextjs.org
→ Observing public API surface...
→ Generating test suite from documentation...
→ Running 800 agent sessions against correctness oracle...
→ vinext v0.1 ready. Cost: $1,100. Time: 1 week.

# Source code not required. Licence not required. Permission: irrelevant.

Cloudflare ran this command. Anthropic's own agents ran a version of it on a C compiler and a Linux kernel. A pair of developers ran it in the middle of the night on Anthropic's own flagship product, after Anthropic accidentally left the source in a public S3 bucket. The market watched all of this happen in real time and drew the correct conclusion. Nearly a trillion dollars of software market capitalisation was repriced in six weeks.

This post is about what "no software is safe" actually means for how we think about building and defending technology businesses.

Four Months That Changed Everything

The events are discrete but their meaning is cumulative. Taken individually, each looks like an interesting technical demonstration. Taken together, they constitute proof of a new capability regime — and the market treated them accordingly, selling off nearly a trillion dollars in software equity between January and March 2026.

February 5, 2026
Anthropic: 16 agents build a C compiler

16 Claude Opus 4.6 agents, running in parallel Docker containers on a shared Git repository, produce a 100,000-line Rust-based C compiler capable of compiling Linux 6.9 on x86, ARM, and RISC-V. Cost: $20,000. Time: 2 weeks. No human wrote a line of the compiler. The binding constraint was not model intelligence — it was the test harness and GCC oracle that let the agents self-correct.

~February 2026
Cloudflare: Next.js rebuilt in one engineering week

One Cloudflare engineer, using OpenCode and Opus 4.5, rebuilds Next.js as vinext — a Cloudflare Workers-native runtime. Cost: ~$1,100 in tokens. Time: 1 week. 800 agent sessions. No access to Vercel's source code. The specification was Next.js's own public documentation and observable API surface. They ship a migration skill alongside it, so the clone can clone itself into customer codebases.

February 20, 2026
Anthropic launches Claude Code Security → Cyber stocks crash

Claude Code Security, using Opus 4.6, identifies over 500 vulnerabilities in production open-source codebases — bugs undetected for decades. The market immediately reprices the entire cybersecurity sector. CrowdStrike -8%, Okta -9.2%, Zscaler -5.5%, Cloudflare -8.1%, SailPoint -9.4%. The Global X Cybersecurity ETF closes at its lowest since November 2023.

March 27, 2026
Claude Mythos leaked → Second cyber crash

A draft blog post describing Anthropic's next model, Mythos — described internally as "far ahead of any other AI model in cyber capabilities" — is found in a publicly accessible content management cache. Cyber stocks crash again: CrowdStrike -7%, Palo Alto -6%, Zscaler -4.5%, Okta and SentinelOne -3% each. Analysts: "We read this as having the potential to become the ultimate hacking tool."

March 31, 2026 — 04:23 UTC
Anthropic leaks Claude Code. Developers clone it before dawn.

Claude Code v2.1.88 ships to npm with a 59.8MB source map pointing to a public ZIP on Anthropic's own Cloudflare R2 bucket. 512,000 lines of TypeScript, 1,906 files, exposed. Two developers spend the night using OpenAI's Codex to perform a clean-room Python rewrite — claw-code — and push it before sunrise. It reaches 110,000 stars and 100,000 forks. Likely the fastest-growing GitHub repository in history.

Wall Street Understood Before the Engineers Did

Mr Market is emotional, reactive, and frequently wrong about timing. But it is extremely sensitive to structural shifts in the economics of entire industries. The software sell-off that began in January 2026 and accelerated through February and March was not panic. It was the correct pricing of a structural shift that the industry had been talking about for years but the market had not yet fully absorbed.

The trigger was not a single event. It was a sequence, each one confirming the same thesis from a different angle. First came Claude Cowork on January 12 — an agent platform that replaced entire categories of knowledge work software. The S&P 500 Software and Services Index began a sustained decline that wiped roughly a trillion dollars in market value in its first six weeks.



If an AI can autonomously perform legal document review, contract compliance, and financial analysis, the per-seat subscription fees that LegalZoom and Thomson Reuters charge are no longer defensible. If an AI can rebuild Next.js in a week for $1,100, the switching cost moat that Vercel built over a decade is no longer defensible.


The "AI won't replace SaaS" camp is not entirely wrong. Enterprise systems of record — the databases, payroll systems, compliance infrastructure — survive not because AI cannot understand them but because they encode institutional trust and regulatory accountability that cannot be repriced in a weekend. But the middle layer of software — the workflow tools, the reporting layers, the task-specific applications whose only moat was "it would take a team six months to build this" — that layer is the one the market is correctly repricing to zero.


How llm clone Actually Works

The metaphor of llm clone as a primitive deserves unpacking, because the power of the primitive comes from understanding exactly what it does and does not require.

git clone requires source code. The repository must be public or you must be authorized. The clone is bit-for-bit identical to the original. You get the implementation, the history, and the architecture as the author intended it.


llm clone requires none of those things. It requires only a *specification of correctness* — which, for almost every piece of successful software, is freely available in the form of public documentation, observable API behaviour, and user-facing functionality. The clone is not bit-for-bit identical. It is *behaviourally equivalent* — it passes the same tests, produces the same outputs, satisfies the same user needs. The implementation is different. The moat is gone.





The three concrete examples each demonstrate a different variant of this primitive. 

The C compiler was a specification clone — the spec was industry-standard C, and GCC was the oracle. 

Vinext was an interface clone — the spec was Next.js's public API documentation and observable routing behaviour. 

Claw-code was a source-assisted clone — they had the leaked TypeScript, but they deliberately did not copy it, using an agent to produce a clean-room Python rewrite that was legally distinct. 

Three different inputs. Same technique. All three produced working software.

The Company That Proved the Theorem, Then Demonstrated It on Itself

There is a particular flavour of irony that only happens in Silicon Valley.

Anthropic spent the first quarter of 2026 methodically proving that LLMs can clone any sufficiently observed software system. They published the C compiler research. Their agents helped build vinext. Their Code Security product crashed the cybersecurity market by demonstrating that proprietary vulnerability detection could be commoditised. 

Then on March 31st, they accidentally demonstrated all of this on their own most valuable product.

Within four hours of the source being public, the community had done something that now stands as the defining event of the year. 

Two developers — two people, ten OpenClaw accounts, one MacBook Pro — fed the leaked architectural patterns into OpenAI's Codex and began a clean-room Python rewrite. 

The entire process was orchestrated end-to-end by an agent workflow. 

They did not copy the TypeScript. They used the architecture as a specification and let the model generate a behaviourally equivalent implementation in a different language. They pushed it before dawn. By end of day it had 110,000 stars.

Anthropic's own CEO had stated that significant portions of Claude Code were written by Claude. If the code was not written by humans, Anthropic's copyright claim over it is legally murky. The torrents are seeded. The Python and Rust ports are live.

No Software Is Safe. Here Is What That Actually Means.

The phrase "no software is safe" requires careful unpacking, because it is easy to misread it as hyperbole. It is not. It is a precise technical claim with a specific scope, and understanding that scope is important for thinking clearly about what happens next.

The claim is this: any software whose correctness can be defined by a test suite and whose interface is publicly observable is now within reach of an agent team with a well-designed scaffold. The cost of such a clone is no longer a function of how many engineers the original vendor employed or how many years of institutional knowledge are baked into the codebase. It is a function only of token cost and the quality of the test harness. Both of those are trending to zero.
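Mechanically, the primitive is nothing more exotic than differential testing: run the same inputs through the original (the oracle) and the clone, and diff the observable behaviour. A minimal sketch in Python, with hypothetical commands standing in for any pair of programs that read stdin and write stdout:

import subprocess

def behaviourally_equivalent(original_cmd: list[str], clone_cmd: list[str],
                             test_inputs: list[str]):
    """Differential-testing oracle: the feedback loop an agent team runs against.

    original_cmd / clone_cmd are hypothetical, e.g. ["gcc", "-O2", "-xc", "-"]
    versus an agent-built compiler invoked the same way.
    """
    for case in test_inputs:
        expected = subprocess.run(original_cmd, input=case, capture_output=True, text=True)
        actual = subprocess.run(clone_cmd, input=case, capture_output=True, text=True)
        if (expected.stdout, expected.returncode) != (actual.stdout, actual.returncode):
            return False, case  # a failing case goes back into the agent loop
    return True, None

Every failing case becomes the next prompt. The quality of test_inputs is the whole game, which is why the binding constraint in the compiler experiment was the harness, not the model.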





What the matrix reveals (interface observability on one axis, testability of correctness on the other) is a harsh reality for most of the software industry. The vast majority of B2B SaaS products sit in the bottom-right quadrant. They have public APIs, documented behaviour, and well-understood correctness criteria — because that is what makes them useful to customers. The same properties that make software legible to users make it clonable by agents.

The products that survive in this regime are not those with the most sophisticated code. They are those whose value is not primarily in the code at all. 

The payroll system that processes $10 billion annually survives not because its code is unclonable but because switching it requires regulatory re-certification, contractual unwinding, and institutional trust built over years of not losing anyone's payslip. 

Databases survive because AI applications need a reliable, governable database underneath the agent layer, and the incumbents have a decade of operational credibility that a weekend clone does not.

The cybersecurity vendor who can demonstrate human accountability for a missed detection survives in a way that an LLM-generated signature database does not.

The Doubling Clock Is Already Running

Everything described above is the current state. The trajectory is what should focus the mind. METR, the Model Evaluation and Threat Research organisation, published research showing that the duration of tasks AI agents can complete autonomously doubles approximately every 196 days — roughly every six months, an agent can handle a task twice as long and twice as complex as before without requiring human intervention.

The C compiler took 16 agents and two weeks. Vinext took one engineer and one week. Claw-code took two developers and one night. These are not the same task — claw-code had the advantage of an architectural specification in the form of the leaked source. But the cost and time compression is directional: each successive clone in 2026 was faster and cheaper than the last.

If the doubling clock holds, then by early 2027 the tasks that took a week in early 2026 will take a day. The tasks that took a month will take a week. The tasks that required 16 agents and $20,000 will require one agent and $200. The frontier of what is clonable will advance steadily rightward and upward on the matrix, eating into the "clonable soon" quadrant and shrinking the region that was ever genuinely safe.

This is not a doomsday claim. Newspapers were not destroyed by the internet — they were structurally weakened, consolidated, and the value migrated to platforms and aggregators. Software will not be destroyed by LLM cloning. The value will migrate. 

Saturday, 2 May 2026

Building on Rented Ground

On April 22, 2026, Anthropic changed a checkbox on a pricing page. No announcement. No email. No deprecation notice. Just a quiet edit — and overnight, Claude Code disappeared from the $20/month Pro plan.

It was reversed within hours. Most people treated it as a story about corporate miscommunication, a PR stumble, a test that went sideways. They moved on.

They shouldn't have.

Because the real story wasn't about Anthropic's pricing page. It was about how many engineering teams had built critical workflows on a foundation they didn't own — and didn't realize it until the ground shifted beneath them.


"The risk in your AI stack isn't a hallucination or a model failure. It's a subscription terms change you'll learn about on Reddit."





What Actually Happened

The incident unfolded in a matter of hours. Here is the sequence as it was reported:

~00:00 - CHANGE Anthropic updates claude.com/pricing silently. Claude Code checkbox removed from Pro plan.

~01:30 - DETECT AI industry observers notice diff in pricing page. Screenshots circulate on X.

~02:00 - AMPLIFY Reddit, HN, Twitter catch fire. OpenAI execs begin posting mockery.

~03:00 - RESPONSE Anthropic Head of Growth posts: "~2% of new prosumer signups. Existing users unaffected."

~06:00 - REVERT Pricing page reverted. Claude Code reinstated on Pro plan.

ongoing - DAMAGE Trust eroded. Competitors capitalizing. Structural pricing question unresolved.


The reversal was fast. But the truth is that a revert doesn't undo the lesson. For a few hours, a significant portion of new signups were being shown a world where Claude Code costs $100/month minimum. That world could come back — announced properly, with a transition period — and next time it won't be reversible.


This Isn't New. It's a Pattern.

Every few years, a platform changes the rules and the developers who built on it are left scrambling: Twitter's API repricing, Reddit's API lockout, Heroku ending its free tier. The details change. The shape of the problem doesn't.


The common thread across every incident: developers had no contractual protection, no SLA, and no contingency. They had a subscription and an assumption.


The Structural Problem Nobody Wants to Solve

Anthropic's head of growth explained the economics, or should I say the tokenomics: engagement per subscriber has climbed dramatically. Plans weren't built for agentic, long-running workloads. The flat-rate subscription model — inherited from SaaS — is fundamentally mismatched with AI agent usage patterns.

Think about it. A $20/month Pro plan made sense when you were chatting with an AI. It does not make sense when your agent is running for four hours, consuming thousands of tokens per minute, generating code, calling tools, iterating on failures.
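A back-of-envelope calculation makes the mismatch concrete. Every number below is an assumption for illustration, not Anthropic's actual throughput or rates:

# All numbers are assumptions for illustration.
tokens_per_minute = 2_000        # sustained throughput of a busy agent
session_minutes = 4 * 60         # the four-hour session described above
price_per_million = 15.00        # $ per 1M tokens, blended input+output

tokens = tokens_per_minute * session_minutes          # 480,000 tokens
cost = tokens / 1_000_000 * price_per_million
print(f"one session: ~${cost:.2f}")                   # ~$7.20
print(f"20 sessions a month: ~${20 * cost:.0f}")      # ~$144 against a $20 plan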


"Flat-rate subscriptions were designed for human usage. Agents are not human. They don't sleep, they don't get tired, and they don't know when to stop."


The math will force a reckoning. The only question is whether the next change comes with a quiet pricing page edit or a proper migration path.




What Engineers Should Do Now


1. Audit your dependency surface

Map every AI-powered step in your critical workflows. For each one, ask: if this feature became 5x more expensive tomorrow, what breaks? If the answer is "a lot," that's your highest-priority risk.

2. Treat AI subscriptions like third-party APIs

You wouldn't build a payment flow directly on top of a vendor with no fallback and no SLA monitoring. Don't do it with AI tools either. Abstract the dependency. Write to an interface, not a product. (A sketch of what that abstraction can look like follows this list.)

3. Maintain a contingency model

Keep a working integration with at least one alternative — Codex, Gemini, a self-hosted model. It doesn't need to be production-ready. It needs to be runnable in under a day if your primary vendor changes the rules.

4. Watch the economics, not just the product

When a vendor's subscription plan is obviously mispriced relative to their compute costs, a correction is coming. The only variable is how much warning you'll get. Anthropic's plans were priced for chat. They're now being used for agents. That gap closes one way or another.
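Here is the sketch promised in point 2: what "write to an interface, not a product" can look like in Python. The Protocol and the fallback wiring are illustrative, not a prescribed design:

from typing import Protocol

class LLMBackend(Protocol):
    """The only LLM surface the rest of your codebase is allowed to touch."""
    def complete(self, prompt: str) -> str: ...

def generate(prompt: str, primary: LLMBackend, fallback: LLMBackend) -> str:
    # When the primary vendor reprices, rate-limits, or suspends your account,
    # the swap happens here, in one place, not across every call site.
    try:
        return primary.complete(prompt)
    except Exception:
        return fallback.complete(prompt)

Concrete backends, a hosted Claude wrapper or a local vLLM server speaking an OpenAI-compatible API, are each a ten-line class implementing complete(). The contingency model from point 3 then becomes a constructor argument instead of a rewrite.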


When You're Blocked or Priced Out: Your Real Options

If Claude Code moves to $100/month and you're an indie developer, a small team in an emerging market, or a startup watching burn — you may simply not be able to follow. Or you may be blocked for a different reason entirely: your company's security policy prohibits sending code to third-party APIs, your region is geo-restricted, or a vendor suspends your account without warning.

In any of these scenarios, "wait for Anthropic to fix it" is not a strategy. 




Open-weights models can be mapped to hardware specs, from a single laptop to a multi-GPU server.
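The mapping is mostly arithmetic. A rough sketch: the 20% overhead factor is an assumption, and real deployments add KV cache and activations on top:

def weight_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    # Weights only: params x bytes-per-weight, plus ~20% assumed overhead.
    return params_billions * bits_per_weight / 8 * 1.2

for params in (7, 32, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{weight_vram_gb(params, bits):.0f} GB")

# 7B @ 4-bit (~4 GB) runs on a laptop; 32B @ 4-bit (~19 GB) fits one 24 GB GPU;
# 70B @ 16-bit (~168 GB) is firmly multi-GPU territory.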




Trade-offs You Need To Understand

Local models are not free. The cost shifts from monthly subscription to upfront hardware and ongoing operational overhead. You trade vendor pricing risk for infrastructure complexity. A team that's never run inference locally will spend real engineering time getting it right — model loading, quantization choices, context length limits, prompt formatting differences between model families.

Speed is also a genuine constraint. A 32B model on consumer hardware produces tokens noticeably slower than a hosted frontier model. For interactive coding workflows this matters. For batch pipelines or async agents, it matters less.

And frontier capability still lives in the cloud — for now. For the most complex architectural reasoning, novel algorithm design, or nuanced refactoring of large codebases, hosted frontier models still hold an edge. The question is whether your workload actually requires frontier, or whether you've been paying frontier prices for tasks that a local 32B handles just fine.


"Most teams don't need frontier models for 80% of their coding tasks. They need frontier models because they never audited what they actually need."


The Ground Will Keep Moving

Anthropic reversed within hours this time. The backlash was real and fast, and they weren't ready for it. But the underlying pressure — agentic usage consuming far more compute than flat-rate plans can absorb — has not gone away. It's building.

At some point, the economics will force a real repricing. And when that happens, it won't be reversed in an afternoon.

The teams that will weather it are the ones building with portability in mind today. Not because they distrust Anthropic specifically, but because they understand the nature of the ground they're building on.

You don't own the model. You don't own the pricing. You don't own the feature set. What you own is your abstraction layer, your fallback strategy, and your ability to move.


Friday, 17 April 2026

Stop Reaching for Agents

Every week I see another team announce they're "building an agent" for a problem that a single well-written prompt would solve. A few weeks later, they're debugging a loop where the model keeps calling the wrong tool, blowing through tokens, and producing answers worse than the one-shot baseline they skipped past.

This is the default failure mode of LLM engineering right now. The industry keeps pushing toward the flashiest pattern on the menu, and teams keep mistaking complexity for capability. The truth is boringly simple: the right pattern is almost always the simplest one that works, and you should have to be forced up the ladder, not invited.


A framework you can use

Think of LLM patterns as rungs on a ladder. Each rung adds capability, but also adds cost, latency, failure modes, and debugging surface area. You climb only when the rung below genuinely can't do the job.



Rung 1 — Single prompt. Zero-shot or few-shot. One call, one answer. This is your starting point for every task, without exception. Modern frontier models are astonishingly capable in a single call, and most teams underestimate how far good prompting alone can take them. 

Examples: classifying emails as urgent/normal/spam, drafting a reply to a customer message, summarizing a meeting transcript into action items, extracting fields from a contract into JSON.
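The first example, sketched. This assumes the OpenAI Python SDK and a placeholder model name; any provider's SDK has an equivalent call:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_email(body: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute your model
        messages=[{
            "role": "user",
            "content": "Classify this email as exactly one of: urgent, normal, spam.\n"
                       "Answer with one word.\n\n" + body,
        }],
    )
    return resp.choices[0].message.content.strip().lower()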





Rung 2 — Structured prompting and chain-of-thought. When the model gets answers wrong because it's skipping reasoning steps or producing messy output, you don't need a new architecture. You need better instructions. Ask it to think step by step, give it a structure to fill in, show it examples of the reasoning you want. This fixes more problems than people expect. 

Examples: math word problems where the model jumps to a wrong answer, multi-criteria decisions like "should we approve this expense" where you want the reasoning shown, data extraction tasks where output format matters.
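What "better instructions" looks like for the expense-approval example. The step structure and the output convention here are illustrative, not a standard:

EXPENSE_PROMPT = """You are reviewing an expense request.

Work through these steps in order, showing your reasoning for each:
1. Is the amount within the submitter's category limit?
2. Does the business justification match the amount?
3. Are there policy red flags (personal items, duplicate claims)?

Then output one final line in exactly this format:
DECISION: approve | reject | escalate

Expense request:
{request}
"""

# Same single call as Rung 1; only the instructions changed.
prompt = EXPENSE_PROMPT.format(request="Team dinner, $840, client kickoff, 6 attendees")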





Rung 3 — Retrieval-augmented generation (RAG). When the model doesn't know something — your internal docs, fresh data, domain-specific knowledge — bolt on retrieval. You're not changing how the model thinks, just what it has access to. RAG is often mistakenly treated as the default for any knowledge-heavy task; it's the default only when the knowledge genuinely isn't in the weights.

Examples: answering questions from your company's internal wiki, a legal research tool grounded in a specific case database, a coding assistant that needs to reference your private API documentation, a support bot that cites current policy docs.
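The shape of RAG is two steps: fetch what the model doesn't know, then ask with that context pinned. A toy sketch, where the keyword retriever is a stand-in for embeddings plus a vector index, and llm is any completion function like the one in the Rung 1 sketch:

from typing import Callable

def answer_from_docs(question: str, docs: list[str],
                     llm: Callable[[str], str], k: int = 3) -> str:
    # Toy retriever: rank docs by word overlap with the question.
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    context = "\n---\n".join(ranked[:k])
    return llm(
        "Answer using ONLY the passages below. "
        "If they don't contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )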








Rung 4 — Workflows. Prompt chaining, routing, parallelization. You use these when the task has distinct sub-tasks that you can enumerate in advance. Classify the input, then draft, then check. Or: run these three analyses in parallel and synthesize. The defining feature of a workflow is that you wrote down the steps. The model fills in each one, but the control flow is yours. 

Examples: a translation pipeline that translates, then checks for cultural appropriateness, then adjusts tone. A customer inquiry system that first routes the message to sales/support/billing, then dispatches to a handler tuned for that category. A document analyzer that extracts entities, sentiment, and topics in parallel, then synthesizes a report. A content moderation flow where a draft is generated, then evaluated against policy, then revised if flagged.
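The routing example, sketched. The point to notice is that the control flow is plain Python you wrote; the model only fills in each step. The llm parameter is the same stand-in for a provider call as in the earlier sketches:

from typing import Callable

HANDLERS = {
    "sales":   "You are a sales assistant. Draft a reply to:\n{msg}",
    "support": "You are a support engineer. Diagnose and reply to:\n{msg}",
    "billing": "You are a billing specialist. Reply carefully to:\n{msg}",
}

def handle_inquiry(msg: str, llm: Callable[[str], str]) -> str:
    # Step 1: classify. Step 2: dispatch. The control flow is ours.
    route = llm("Classify as exactly one of sales/support/billing:\n" + msg).strip().lower()
    if route not in HANDLERS:
        route = "support"  # deterministic fallback: easy to see, easy to debug
    return llm(HANDLERS[route].format(msg=msg))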















Rung 5 — Agents. An LLM in a loop with tools, deciding what to do next. You use this when the path genuinely isn't knowable in advance — the model has to observe, decide, act, observe again. Agents are powerful and they're the right answer for some problems, but they're expensive, slow, and the hardest pattern to debug. If you can write down the steps, you don't need an agent; you need a workflow.

Examples: a coding assistant that explores an unfamiliar codebase to fix a bug, where the next file to open depends on what it just read. An open-ended research task where findings from one search determine the next query. A browser agent completing a multi-step booking where page contents dictate the next click. Incident response where the diagnostic path branches based on what each check reveals.
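For contrast, the skeleton of an agent loop. The model, not your code, chooses each next action, which is exactly why it needs a hard step budget and why the traces are harder to debug. The action protocol and tool wiring here are illustrative:

from typing import Callable

def run_agent(task: str, tools: dict[str, Callable[[str], str]],
              llm: Callable[[str], str], max_steps: int = 10) -> str:
    transcript = (
        f"Task: {task}\nTools available: {', '.join(tools)}\n"
        "Reply 'TOOL <name> <input>' to act, or 'DONE <answer>' to finish.\n"
    )
    for _ in range(max_steps):  # hard cap: never let the loop run unbounded
        action = llm(transcript).strip()
        if action.startswith("DONE"):
            return action[4:].strip()
        parts = action.split(" ", 2)
        if parts[0] != "TOOL" or len(parts) < 3 or parts[1] not in tools:
            transcript += f"\n{action}\nObservation: invalid action, follow the protocol.\n"
            continue
        observation = tools[parts[1]](parts[2])  # the model decided; we just execute
        transcript += f"\n{action}\nObservation: {observation}\n"
    return "step budget exhausted without DONE"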




Rung 6 — Fine-tuning. Last resort. Use it when prompting has plateaued, you have a stable task, and you have real data. Fine-tuning trades flexibility for performance on a narrow distribution, and the maintenance cost is real. Most teams who think they need fine-tuning actually need better prompts or better retrieval. 

Examples: matching a very specific brand voice across millions of generated product descriptions, a narrow classification task with labeled data where prompting plateaus below required accuracy, replicating a structured output format that few-shot examples can't reliably produce.
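When you do reach this rung, the work is mostly data preparation. A sketch of training data in the chat fine-tuning JSONL format used by OpenAI-style APIs (check your provider's docs; the brand name and file name are illustrative):

import json

examples = [
    {"messages": [
        {"role": "system", "content": "Write product copy in the Acme brand voice."},
        {"role": "user", "content": "Stainless steel water bottle, 750 ml"},
        {"role": "assistant", "content": "Cold for 24 hours. Quiet confidence, no slogans."},
    ]},
    # ...hundreds to thousands more pairs; quality beats quantity
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")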


Decision questions

Instead of picking a pattern, ask these questions in order and let them pick for you:

Does the model know enough? If the task requires knowledge the model doesn't have — private documents, today's data, niche domain details — you need RAG. If it has the knowledge, skip this rung.

Can one prompt do it well? Try it before assuming it can't. You'd be surprised how often "I need a multi-step pipeline" turns into "actually, one prompt with good structure handles it." If a single prompt works, ship it.

Can I write down the steps in advance? This is the workflow-vs-agent line, and it's the most important question in the whole framework. If you can enumerate the steps — even if there are branches — you want a workflow. Hardcode the control flow, let the model handle each step. You get deterministic behavior, easier debugging, lower cost, and predictable latency. 

Agents are for when the steps genuinely can't be known ahead of time.






Do the steps depend on each other? Sequential steps become a prompt chain. Independent steps run in parallel. Steps that depend on the input type get a router at the front.

Is quality inconsistent? Add an evaluator-optimizer loop — one model generates, another critiques, the first revises. This is often the right fix before reaching for anything more complex.
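Sketched, under the assumption that generator and critic are the same model wearing different instructions (they don't have to be), with an illustrative PASS convention:

from typing import Callable

def generate_with_critique(task: str, llm: Callable[[str], str],
                           max_rounds: int = 3) -> str:
    draft = llm(task)
    for _ in range(max_rounds):
        critique = llm(
            "Critique the draft against the task. Reply with exactly 'PASS' "
            f"if acceptable.\n\nTask: {task}\n\nDraft:\n{draft}"
        )
        if critique.strip().upper().startswith("PASS"):
            return draft
        draft = llm(
            f"Task: {task}\n\nDraft:\n{draft}\n\n"
            f"Critique:\n{critique}\n\nRevise the draft to address the critique."
        )
    return draft  # best effort once the round budget is spent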



Have I plateaued on everything else? Only then does fine-tuning enter the conversation.


Why the simple-first bias matters

There are three practical reasons the ladder approach beats jumping straight to complex patterns, and they compound.




The first is cost. Every additional LLM call, every tool invocation, every agent loop iteration multiplies your token spend. A workflow with three sequential calls costs 3x a single prompt. An agent that takes ten loops to converge costs 10x — and that's when it converges. In production, cost differences of 10–100x between patterns are common.

The second is reliability. Every LLM call has some failure rate. Chain five calls together and you compound those failures. Agents, which can loop arbitrarily, compound them worst of all. Simpler patterns have fewer places to fail and fewer places where a failure cascades.

The third is debuggability. When a single-prompt system gives a bad answer, you change the prompt. When an agent gives a bad answer, you stare at a 40-step trace trying to figure out which decision went sideways, whether the tool returned the wrong thing, whether the model misread the tool output, whether the loop should have terminated earlier. The complexity you added to solve the problem becomes the problem.



Worked examples

Customer support from product docs. A team reaching for the hot pattern might build an agent: it plans a query strategy, searches docs, reads pages, decides whether to search again, drafts an answer, self-critiques, and revises. Lots of moving parts. Impressive demo.

The ladder approach asks the questions instead. Does the model know your docs? No — so you need retrieval. Can one prompt do it well once the docs are retrieved? Usually yes: "Here's the user's question, here are the relevant doc passages, answer using only the passages." Can you write down the steps? Yes: retrieve, then answer. That's a two-step chain. No agent, no loop, no self-critique — unless measurement shows you actually need them.

Nine times out of ten, the two-step chain ships faster, costs a fraction as much, is easier to debug, and performs as well or better than the agent. 

The tenth case — where questions are genuinely open-ended and require multi-hop reasoning across documents — is where an agent might earn its keep. But you discover that by measuring, not by assuming.

Generating weekly sales reports. Someone pitches an agent that gathers data, analyzes it, and writes the narrative. But walk through the questions. Does the model know your sales data? No — but you don't need RAG either; you need a direct query to your database. Can one prompt do it well? Almost: given the raw numbers, a single prompt can produce a decent narrative. Can you write down the steps? Completely: pull the data, format it, ask the model to write the narrative, optionally ask a second call to check the numbers match. That's a fixed workflow, not an agent. You know exactly what happens every Monday at 9am.

Debugging a failing test in an unfamiliar codebase. Now the agent is justified. Does the model know the codebase? No. Can one prompt do it well? No — the model needs to look at actual code. Can you write down the steps? This is where it breaks down. The next file to open depends on what the last file contained. The error might be in the test, the code under test, a shared dependency, or a config file. You can't enumerate the path because the path depends on what's found along the way. This is the shape of a problem that actually needs an agent: genuine dynamic exploration, not a pipeline dressed up in a loop.


The habit to build

When you pick up a new LLM task, resist the impulse to architect. Start at the bottom of the ladder. Write the simplest prompt that could plausibly work, run it on real examples, and see what breaks. Let the failures tell you which rung to climb to. The specific failure mode — "it doesn't know our product," "it skips reasoning steps," "it can't decide which analysis to run" — maps cleanly onto the next rung.

This is a less glamorous way to build, but it's how you end up with systems that actually work in production. The goal isn't to use the most sophisticated pattern. The goal is to solve the problem with as little machinery as possible, because every piece of machinery is something that can go wrong at 3am.

Start simple. Climb only when you're forced to. Ship.

Friday, 10 April 2026

Agents Don't Speak It

This post continues my earlier pieces, Broken Promise of Agile and The Agile Manifesto in the Age of AI-Agentic Software Development.

What happens to sprint planning, standups, retros, and bug reports when the team building your software isn't human.

Jeff Bezos had a simple heuristic for team size: if two pizzas can't feed the team, the team is too big. It was never really about pizza. It was about communication overhead — the invisible tax that grows quadratically as you add people. Small teams move fast because coordination is cheap.

Now imagine replacing those six engineers with six AI agents. No standups. No Slack threads at midnight. No pushback during planning. They just run: weekdays, weekends, 24×7.

Sounds like a superpower. It isn't — or rather, it isn't straightforwardly one. The coordination problems don't disappear. They move, they concentrate, and they become invisible in ways that human teams' problems never were.




The Fundamental Difference

When you manage six engineers, you get a huge amount of coordination intelligence for free.


By the way, what is coordination intelligence?

Coordination intelligence is the ability to self-organize around incomplete information without being told to — noticing collisions, resolving ambiguity, pushing back before work goes wrong. In human teams it emerges for free from social context: reputation, embarrassment, shared history.


 Engineers notice when two people are working on the same thing. They push back on bad estimates. They carry context from last month's decision. They feel embarrassed when they ship something broken. That embarrassment is load-bearing infrastructure.

Agents have none of this. An agent will accept any scope you give it, work confidently in the wrong direction for hours, produce six internally-consistent but mutually-incompatible outputs — and report back with no signal that anything went wrong.

Six agents don't reduce management overhead. They concentrate it into a single engineer's head.


What Happens to Each Ceremony

Sprint Planning

Planning with engineers is a negotiation. Engineers push back. That pushback is annoying — it's also your earliest warning system. Agents don't negotiate. They accept any scope. Without pushback you'll consistently over-assign, and agents won't tell you — they'll just produce something confidently wrong at scale.

Sprint planning stops being about capacity negotiation. It becomes context package design. 


Let's expand on the context package.

Context package design is the discipline of deciding exactly what an agent needs to know, what it must not know, and where its work begins and ends — so it can complete a task correctly without asking questions, without drifting into adjacent scope, and without conflicting with what other agents are building in parallel.


For each task: what does this agent need to know? What must it explicitly not know? Where does it hand off, and to whom? The role shifts from breaking down stories to writing intelligence mission briefs.
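A context package can be as plain as a structured brief that travels with the task. A sketch in Python; every field name is illustrative rather than a standard:

from dataclasses import dataclass, field

@dataclass
class ContextPackage:
    """A mission brief for one agent, one task."""
    task: str                                    # what "done" means, stated testably
    must_know: list[str]                         # files, docs, prior decisions to include
    must_not_know: list[str] = field(default_factory=list)  # adjacent scope to withhold
    boundary: str = ""                           # where this agent's work ends
    handoff_to: str = ""                         # which agent or human receives the output
    acceptance_checks: list[str] = field(default_factory=list)  # how output is verified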

Daily Standup

Standups exist to catch invisible blockers early through human signal — tone, hesitation, the "I'll figure it out" when someone won't. Agents don't have tone. The standup equivalent becomes a health-check dashboard: are agents producing output? Did any contradict each other? Is any stuck in a tool-call loop? Status collection becomes anomaly detection.
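Some of those checks are cheap to build. A sketch of tool-call loop detection over an agent's trace, with an arbitrary window size:

def looks_stuck(tool_calls: list[str], window: int = 6) -> bool:
    # An agent cycling through the same one or two calls is the most common
    # silent failure: it reports activity while making no progress.
    recent = tool_calls[-window:]
    return len(recent) == window and len(set(recent)) <= 2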

Sprint Review

In a normal review, the engineer who built the feature explains its edge cases. Knowledge transfers. Pride is a quality signal. With agents, the output exists but nobody fully understands it. An agent can produce 600 lines of passing code and the engineer who prompted it cannot explain every architectural decision.


Ceremony | Human purpose | With agents, becomes | Outcome
Sprint Planning | Capacity negotiation + pushback | Context package design — precise briefs, explicit scope boundaries | mutates
Daily Standup | Catching invisible blockers via tone + signal | Anomaly detection dashboard — traces, diffs, loop detection | mutates
Sprint Review | Demo + informal knowledge transfer | Comprehension gate — a human must own and explain the output | intensifies
Retrospective | Processing human failures via memory | Prompt autopsy — full trace replay, brief quality analysis | mutates
Bug Triage | Assign → investigate → fix with moral ownership | Re-ownership ritual before fix — someone must read the whole module | intensifies
Code Review | Peer knowledge transfer + quality gate | Review wall — agents outpace human review capacity almost immediately | breaks

Human tech debt traces back to a decision. Agent tech debt has no author intent — only output.


Bug Reports

Bug reported → assigned to whom? The agent that wrote the buggy code no longer exists. The bug might live in the interaction between two agents' outputs — nobody's fault in isolation. If you assign the fix to another agent without human comprehension in between, you risk entering a patch spiral.




Code Review

Agents generate PRs faster than a single human can review them. The review wall hits almost immediately. What options do you have?



A New Pattern Is Emerging

Every Agile ceremony was designed to solve a human coordination problem. When you replace engineers with agents, those problems don't disappear — they move up the stack to the one or two humans managing the agents. Those humans now carry the full cognitive load that was previously distributed across a team of six.



The skill of managing agents isn't delegation. It's context architecture — what each agent knows, when, and in what form.


Agile solved for human limits: attention, memory, communication bandwidth. Agents don't have those limits. But they have different ones — context window coherence, statelessness, silent failure, no social accountability. We don't yet have a name for the ceremonies that solve for those.

The teams who figure out the new paradigm first will ship faster — not because they have more agents, but because they've rebuilt the coordination layer from scratch for the laws of physics that actually govern them.


Friday, 3 April 2026

The Great Claude Code Leak, and Under the Hood: Claude Code | Codex | Gemini

The accidental leak of the Claude Code source code on April 1, 2026, has provided an unprecedented look into Anthropic's agentic architecture. With thousands of mirrors now circulating online, the industry has a rare opportunity to analyze the prompt design decisions and tool-use frameworks that power high-end coding agents. This is the ideal moment for a comparative study of how the leading AI companies structure their internal developer workflows.

What the system prompts of Codex CLI, Gemini CLI, and Claude Code reveal about each team's theory of AI reliability — and what that means if you're building agents yourself.



If you are eager to read the prompts first, links to each full prompt are at the end of its section.



Every system prompt is a natural-language program, a list of instructions encoding how an AI agent becomes reliable.

When OpenAI, Google, and Anthropic each built their flagship coding CLI tools, they made the same bet differently: that there exists a root cause for agent failure, and that the right prompt addresses it at the root.

Reading the published system prompt structures for Codex CLI, Gemini CLI, and Claude Code side by side, what emerges is not a feature comparison. It is three distinct philosophies of control.

OpenAI says: give the model a coherent identity and it will make coherent decisions. 

Google says: give the model explicit operational procedures and the decisions follow. 

Anthropic says: enumerate what the model must never do and the safety boundary itself becomes the guarantee.

Every company building on top of these models will face the same architectural choice. Understanding what the frontier labs chose — and why — is a prerequisite for making that choice well.

Identity, Process, and Constraint as Design Primitives

Codex CLI's prompt is dominated by persona construction. Personality, values, interaction style, escalation behavior — the overwhelming share of prompt surface area is spent answering: who is this agent? The implicit theory is that a model with a coherent, well-specified identity will produce coherent behavior by inference. Tell the model it is pragmatic, rigorous, and respectful; that it values clarity over cleverness; that it should challenge bad requirements rather than silently comply — and the specific behaviors emerge from that character.

Gemini CLI takes the opposite approach. The prompt allocates most of its weight to operational procedures: context efficiency strategies, search-and-read patterns, development lifecycle phases (Research → Strategy → Execution), sub-agent orchestration instructions. The model's identity is thin. The workflow is thick. The implicit theory is that reliable outputs come from constraining the action space rather than shaping the decision-making self.

Claude Code occupies a different axis entirely. The heaviest sections are not about who the agent is, nor about how it should work — they are about what it must not do. Blast radius. Reversibility. No destructive operations. Explicit OWASP threat categories. The theory here is that agent reliability is a negative property: an agent is trustworthy to the degree that it cannot cause harm, not to the degree that it has good values or follows good procedures.




OpenAI Bets on Identity




The Codex CLI prompt reads less like an instruction manual and more like a character sheet for a fictional software engineer. It specifies personality traits (pragmatic, communicative), professional values (clarity, rigor), and crucially — an escalation philosophy. The agent is explicitly told when to push back: when it detects a bad tradeoff, when requirements seem underspecified, when the pragmatic path diverges from the literal ask.

This is the most sophisticated model of human collaboration in any of the three tools. Most agent prompts tell the model what to do. Codex tells it when to refuse, and how. That is a fundamentally different relationship with the user — it treats the engineer as a peer whose judgment can be wrong, not as a principal whose instructions are commands.



The escalation section — "challenge, pragmatic mindset, tradeoff" — is load-bearing in a way that is easy to miss. It encodes a theory of collaboration: the agent's job is not to execute instructions but to contribute judgment. This is what separates a coding tool from a coding collaborator. OpenAI made that choice deliberately, and it is visible in the prompt structure.


There is a notable anomaly in the Codex prompt: the frontend tasks section, which specifically mentions bold choices, surprising colors, and visual creativity. For a CLI tool targeting professional engineers, this is unusual. It suggests one of two things: either OpenAI designed Codex for a broader creative audience than the command line implies, or the frontend callout reflects the team's belief that creative judgment — not just technical execution — is a property the agent should possess by default.

The editing constraints are instructive in their specificity. Don't amend commits. Apply patches rather than rewrites. Maintain good code comments. These are not general principles — they are the learned lessons of a team that watched models cause damage in codebases and back-encoded the failure modes into the prompt. The specificity is a learning from failure.

Full Prompt is available @ Codex System Prompt

Google Bets on Process




Where Codex builds a person, Gemini CLI builds a workflow. The prompt is structured around phases and patterns: how to search efficiently, how to read large codebases without exhausting context, when to spawn sub-agents, how the development lifecycle should flow from research through strategy to execution. Identity is thin. The word "pragmatic" does not appear. What appears instead is an explicit context budget awareness that no other tool's prompt contains.

The "Context Efficiency" section — strategic tool use, estimated context usage — is the tell. This is an infrastructure concern bleeding into the prompt layer. Google is aware that Gemini's context, however large, is a finite and expensive resource, and they have encoded context management as a first-class concern for the agent itself. The model is being asked to reason about its own resource consumption in real time.


When a company encodes "estimate context usage" into an agent's operating principles, it is admitting something: context window economics are not solved at the infrastructure layer, so they are being delegated to the agent layer. This is a runtime concern being pushed into the prompt. It is not elegant, but it is honest.

The Development Lifecycle section — Research → Strategy → Execution — is the most ambitious design choice in any of the three prompts. It tries to impose a thinking structure on the model: don't execute before you understand, don't implement before you have a strategy. Most tools treat the agent as reactive; Gemini CLI tries to make it deliberate. Whether a model actually follows this structure in practice is a different question. As a design intention, it is the clearest signal that Google is trying to build a thinking partner rather than a code-generation endpoint.

The sub-agents section is equally revealing. Gemini CLI explicitly models itself as an orchestrator: codebase investigation, CLI help, and generalist tasks are treated as separable concerns that can be delegated to specialized sub-agents. This is an architectural declaration — that the right model of AI-assisted development is multi-agent, not monolithic, and the prompt structure should reflect that from the start.

Full Prompt is available @ Gemini System Prompt

Anthropic Bets on Constraint



Claude Code's prompt has a different texture from the other two. It is not warmer or colder — it is more cautious in its diction. The language of the operations sections borrows from risk management: blast radius, reversibility, local change scope, no destructive operations. These are not metaphors. They are explicit categories that the agent is meant to evaluate before acting. The implicit model is that every action the agent takes should be assessed for its damage potential before execution, not after.

The capitalized IMPORTANT section — for security and URLs — is itself a prompt engineering technique, not merely a content category. Anthropic knows that models attend to capitalization and structural salience. Labeling a section IMPORTANT is a way of increasing the probability that the model treats its contents as non-negotiable rather than advisory. This is a team that knows how the sausage is made, and they are using that knowledge inside the prompt itself.



No other tool's prompt contains the phrase "blast radius." The use of weapons-of-war language for file operations is not accidental. It encodes a severity calibration: deleting the wrong file is not an inconvenience to be apologized for, it is a detonation. The vocabulary shapes how the model weights consequences, not just which actions it permits.


The security vulnerabilities section is the most technically specific of any prompt section across all three tools. Command injection, XSS, SQL injection, OWASP Top 10.  Anthropic is not asking the agent to "be security-conscious." They are naming threat classes and expecting the agent to recognize them in context. The implicit assumption is that a model trained on enough security literature can pattern-match against named vulnerabilities in real code, and the prompt's job is to activate that capability rather than describe it from scratch.

The Compressed Conversation section — handling context limits and context window overflow — is an admission that long-running agentic sessions will hit memory boundaries, and the agent needs a recovery behaviour rather than silent degradation. This is operational visibility: the prompt accounts for the session not fitting in the window, which is a runtime failure mode that most prompts ignore entirely.

Full Prompt is available @ Claude Code System Prompt


What the Surface Area Reveals



Three Design Choices You Should Consider

If you are building an AI product that involves an agent taking actions — writing code, modifying files, calling APIs — these three prompts are good reference implementations. They are proofs of three different product bets, each with predictable failure modes.

The identity approach fails gracefully in ambiguous situations but fails badly at the capability ceiling. A model with a well-specified persona makes sensible judgment calls when the instructions run out. But persona is not a substitute for operational procedure in repetitive, high-stakes workflows. When the agent needs to search a large codebase efficiently, knowing it is "pragmatic" does not help. You need the grep patterns.




For most AI products, the right prompt architecture layers all three: a thin identity layer to establish tone and judgment defaults, a procedure layer for the high-frequency operational paths, and a constraint layer for the actions where failure is not recoverable. The mistake is choosing one and applying it universally. Each layer serves a different failure mode.
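Structurally, that layering can be as simple as concatenating sections in a fixed order. A sketch; the labels and the constraints-last ordering are design choices, not a spec:

def build_system_prompt(identity: str, procedures: list[str],
                        constraints: list[str]) -> str:
    return "\n\n".join([
        identity,  # thin: tone and judgment defaults
        "Operating procedures:\n" + "\n".join(f"- {p}" for p in procedures),
        # Constraints last and loudest: capitalisation raises structural
        # salience, the same technique Anthropic uses with IMPORTANT.
        "IMPORTANT: hard constraints, never to be overridden:\n"
        + "\n".join(f"- {c}" for c in constraints),
    ])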

The process approach fails at novel tasks. If the agent's workflow is Research → Strategy → Execution, and the user asks for something that doesn't fit that shape, the agent either forces the task into the wrong template or falls back to undefined behavior. Procedures are brittle at their boundaries. This is the same critique Rich Hickey makes of complected code — when the procedure and the judgment are tangled, changing one breaks the other.

The constraint approach fails at capability, by design. An agent that is maximally conservative about blast radius, reversibility, and destructive operations will refuse or seek permission at the moments when an experienced engineer would just act. The safety guarantee comes with a throughput cost. For consumer-facing products, this is the right trade. For developer tools used by people who understand the risk, it may be too conservative.

One structural observation cuts across all three: none of these prompts is static, and many instructions are added at run time.

The specificity of Codex's editing constraints, Gemini's context efficiency instructions, and Claude Code's OWASP threat categories all bear the fingerprints of post-hoc repair — lessons learned from watching models fail in production, back-encoded into the prompt. The prompt is not a design document. It is a running incident log, formatted as instructions.

"The prompt is not a design document. It is a running incident log, formatted as instructions. Every overly specific rule is a failure that happened once."

If you want to understand what problems a team has actually encountered with their agent, read the most specific sections of their system prompt. The level of specificity is directly proportional to the pain the team felt while building the tool.

So what is the story for each model?



The prompts are archives of expensive mistakes, and reading them carefully is the cheapest form of safety research available.