Sunday, 22 March 2026

Induced Demand Loop: Anthropic Sells You the Problem, Then the Solution

Anthropic built Claude Code to write your software, and they have done an impressive job of making it one of the most popular agentic coding tools: it is built to produce good code on the first pass, or at least with shorter iteration loops.

Now they sell Claude to review what Claude wrote. The snake has found its tail — and this is not an accident.


There is a pattern in business history that feels, the first time you notice it, like a conspiracy. A company creates a category of problem, then creates the solution, then collects rent from the gap between the two. 

Security consultancies that audited the systems they also architected. 

ERP vendors that sold implementation services for the complexity they introduced. 

Management consultants who institutionalized the inefficiencies they were paid to eliminate.

The AI era has produced its own version of this. It is more elegant than the historical ones — structurally self-reinforcing in a way the older models could only approximate. And Anthropic, with the quiet launch of code review as a product category following Claude Code, has demonstrated the loop with unusual clarity.

First, They Shipped the Generator

Claude Code is, at its core, an autonomous coding agent. It reads your codebase, writes implementations, refactors modules, scaffolds tests, and submits pull requests with the confidence of a senior engineer who has never experienced the social cost of a bad review. It is fast, tireless, and cheap. It is also — and this matters — statistically wrong in ways that are difficult to detect without reading every line it produces.

The product was sold, correctly, as a productivity multiplier. The pitch was straightforward: software engineering is bottlenecked on implementation speed, and Claude Code removes that bottleneck. Ship faster. Do more with fewer engineers. The implementation is no longer the hard part.

What this framing quietly omitted was the second-order effect. If you remove the implementation bottleneck, you do not get the same system running faster — you get a different system running under entirely new constraints. The bottleneck shifts. And the new bottleneck, almost inevitably, is verification.


The speed of generation outpaces the speed of comprehension. Code review was already the slowest lane on the engineering highway. Claude Code just added ten more lanes of traffic.

Every line that Claude Code writes must be read by someone who understands it well enough to sign off on it. That person is, in most organizations, increasingly rare. 

The engineers who remain after a round of AI-enabled headcount reduction are the ones reviewing output, not producing it. They were already stretched. Now they are reviewing five times as much code per day. Quality degrades. Bugs ship. Technical debt accumulates at the speed of token generation.
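The arithmetic behind that shifted bottleneck is worth making concrete. Here is a minimal toy model of the review queue; the throughput numbers are invented purely to illustrate the dynamic, not measured from any real team:

```python
# Toy model of the review bottleneck: when code is generated faster than it
# can be reviewed, the unreviewed backlog grows without bound.
# All rates are hypothetical, chosen only to illustrate the dynamic.

def review_backlog(gen_loc_per_day: float, review_loc_per_day: float, days: int) -> float:
    """Unreviewed lines of code accumulated after `days`."""
    backlog = 0.0
    for _ in range(days):
        backlog += gen_loc_per_day                    # agent output joins the queue
        backlog -= min(backlog, review_loc_per_day)   # reviewers drain what they can
    return backlog

# Before the agent: generation and review capacity roughly matched.
print(review_backlog(1_000, 1_000, 30))   # → 0.0

# After the agent: generation jumps 5x, review capacity is unchanged.
print(review_backlog(5_000, 1_000, 30))   # → 120000.0
```

The second run does not settle into a larger but stable backlog; it grows linearly forever, until either review capacity scales or review standards quietly erode.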


Then, They Shipped the Reviewer

The code review product is the second half of the loop. It reads the code — implicitly, the code that Claude Code wrote — and identifies issues, suggests improvements, flags security concerns, enforces architectural consistency. It is, in essence, an AI that reviews the output of a different AI trained by the same company, sold to the same customer, billed on the same invoice.

The symmetry is so clean it almost obscures the mechanism. But the mechanism is precise: Claude Code created the supply of unreviewed code. Code review created the demand for reviewing it. The company captures value on both ends of the transaction. The customer pays twice for a problem they did not have before they adopted the first product.


The Pattern, Precisely

This is not identical to the older consulting-firm model, where the problem was manufactured through advice. Here, the problem is an emergent property of the product itself. Claude Code does not intend to create review debt — it simply does, structurally, as a consequence of its own efficiency. It is the rational response to a real problem. The fact that the same company profits from both sides is not malfeasance. It is alignment.

This is what I call the induced demand pattern — AI tools that structurally generate the conditions for their own expansion. The code generation category is the clearest instance yet. Generate more code, create more review surface, sell more review tooling, use that revenue to train better generation models, which generate more code. The loop is not just self-sustaining. It is self-accelerating.
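The shape of that loop can be sketched as a toy simulation. The coupling constants below (unit pricing, a 20% reinvestment rate) are invented for illustration only; the point is that the output curve compounds rather than merely grows:

```python
# Toy simulation of the induced demand loop: generated code creates review
# surface, the review product monetizes that surface, and revenue is
# reinvested into a better generator. All constants are hypothetical.

def simulate_loop(cycles: int, price: float = 1.0, reinvestment: float = 0.2) -> list[float]:
    """Normalized code volume per cycle as review revenue feeds the generator."""
    volume = 1.0                                # code output in cycle 0 (normalized)
    history = [volume]
    for _ in range(cycles):
        review_surface = volume                 # every generated line needs review
        revenue = price * review_surface        # review tooling monetizes the surface
        volume += reinvestment * revenue        # revenue improves the generator
        history.append(volume)
    return history

history = simulate_loop(5)
deltas = [b - a for a, b in zip(history, history[1:])]
# Each cycle's increment exceeds the last: self-accelerating, not just self-sustaining.
print(all(later > earlier for earlier, later in zip(deltas, deltas[1:])))  # → True
```

Under these toy constants the volume compounds geometrically (a factor of 1.2 per cycle); any positive reinvestment rate produces the same qualitative shape.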


Why the Snake Eats Its Own Tail

The ancient image of a serpent consuming itself, the ouroboros, was originally a symbol of cyclical renewal. The snake does not die; it feeds itself, perpetually. This is an accurate metaphor for what Anthropic has constructed.

The model that reviews the code learns from what it reviews. The patterns it flags become training signal for the model that writes the code next time. The review product improves the generation product, which increases the volume of code requiring review, which expands the market for the review product. There is no exterior — no part of this loop that does not feed back into the loop itself.

Compare this to the classical tech platform flywheel, where more users attract more sellers who attract more users. That loop is linear in its dependencies — it requires external participants at every node. The AI coding loop is tighter. The only external participant is the engineer, and even the engineer's role is progressively compressed as each generation of the model improves. The loop internalizes its own demand generation.


Implication for Engineers

The engineer who adopts Claude Code and then adopts the code review product has not automated away two separate problems. They have enrolled in a subscription to a problem-solution pair that is jointly managed by a vendor whose revenue depends on both sides of it remaining necessary. This is not a reason to reject the tools — the productivity gains are real, and the competitive pressure to adopt them is overwhelming. But it is a reason to be precise about what is actually happening.

The skills that used to be valuable in this workflow — the ability to write clean code quickly, to hold an architectural pattern in your head while implementing it — are being hollowed out from below. The skills that survive this compression are the ones at the top of the evaluation chain: the ability to read code written by someone else (or something else) and judge it accurately. The ability to know what a correct system feels like before you have built it. The ability to detect subtle errors in logic that no statistical model will flag because no statistical model has ever understood what the code is supposed to do.


The review product is not your ally in this dynamic. It is a product that profits most when the gap between what gets generated and what is actually correct remains large enough to require continuous attention.

This is the tension that no product announcement will name directly. Code review tooling, like all automated verification, has an incentive structure that is subtly misaligned with actually closing the verification gap. 

A perfect reviewer would put itself out of business. A profitable reviewer finds just enough to flag that you keep paying — while the deeper architectural drift, the slow divergence between what the system does and what it should do, accumulates beneath the surface of any automated check.


What the Pattern Predicts

If the induced demand pattern holds — and structurally, I believe it will — the next several years of AI developer tooling will follow a predictable shape. Every tool that accelerates a phase of the engineering lifecycle will create a corresponding tool that manages the debt that acceleration produces. Test generation will be followed by test quality analysis. Documentation generation will be followed by documentation accuracy verification. Architecture suggestion will be followed by architecture review.

Each pair will be sold by the same vendors, or by vendors whose incentives are structurally identical. Each pair will be presented as the solution to a problem, while quietly sustaining the conditions that make the problem recur. The stack will grow upward, each layer extracting value from the gap created by the layer below it.

The engineers who navigate this without becoming permanently dependent on it are the ones who maintain a clear model of what the system is supposed to do — not just what it currently does. That model is not a product. It cannot be sold, automated, or subscribed to. It is built slowly, through exposure to consequences, through the experience of being wrong in ways that matter and learning why.

Judgment compounds. Skills depreciate.

Human Judgment as a Cloud Function

Anthropic is not cynically manufacturing problems. The induced demand here is emergent, not engineered. But emergent does not mean neutral. The structure rewards continued dependence, punishes the development of in-house evaluation capability, and gradually transfers the judgment function — the most valuable thing an engineering team possesses — to a vendor whose model of your system is forever incomplete.

The snake eats its tail. The tail grows back. The snake is always hungry.


