Thursday, 13 November 2025

The Agentic Commerce War: A Car on Bumpy Roads

 There's a lawsuit that should make every engineer building AI applications pause and think carefully about the world they're creating. Amazon is suing Perplexity AI, and while the legal complaint talks about "covert access" and "computer fraud," what's really happening is far more interesting: we're watching the first shots fired in a war over who gets to control the future of commerce.

The "AI revolution" in shopping is probably going to make markets less competitive, not more. Let me explain why.




The Pattern-Matching Disguised as Innovation



We've seen this movie before. In the 2000s, it was about who controls app distribution. Apple and Google built "open" platforms, welcomed developers, then extracted 30% rent from everyone. In the 2010s, it was about who controls attention. Facebook and Google became the gatekeepers to your customers, then jacked up ad prices once you were dependent on them.

Now we're in the 2020s, and the game is about who controls shopping intent. The technology changed—from apps to ads to AI agents—but the fundamental power dynamics remain depressingly familiar.

Agentic commerce is the fancy term for AI systems that can shop on your behalf. Tell ChatGPT you need running shoes under $100, and it searches stores, compares options, and completes the purchase. No browsing, no clicking through pages, no "adding to cart." The AI does it all.

McKinsey forecasts this could generate $1 trillion in global commerce by 2030. Traffic to U.S. retail sites from GenAI browsers has already jumped several-fold year over year in 2025.

Amazon vs. Perplexity: A Case Study in Platform Power

Here's what actually happened, stripped of the legal jargon:

November 2024: Amazon catches Perplexity using AI agents to make purchases through Amazon accounts. They tell Perplexity to stop. Perplexity agrees.

July 2025: Perplexity launches "Comet," their AI browser that can shop for you. Price tag: $200/month.

August 2025: Amazon detects Comet's agents are back, but this time they're disguised as Google Chrome browsers. Amazon implements security measures to block them.

Within 24 hours: Perplexity releases an update that evades Amazon's blocks.

November 2025: Amazon files a federal lawsuit accusing Perplexity of violating the Computer Fraud and Abuse Act. Perplexity publishes a blog post titled "Bullying is Not Innovation."

Now, you might think this is about security or customer protection. And sure, those are real concerns—when AI agents access customer accounts, make purchases, and handle payment data, security matters enormously.

But let's be honest about what's actually happening here: Amazon is defending its moat.




Amazon built a trillion-dollar business by owning the customer relationship. They know what you buy, when you buy it, how much you're willing to pay, and what you'll probably want next. This data advantage is what makes Amazon Rufus (their own shopping agent) dangerous to competitors—it already knows you better than any third-party agent ever could.

If Perplexity's agents can freely roam Amazon's platform, comparison-shop ruthlessly, and complete purchases without Amazon controlling the experience, then Amazon loses three critical things:

  1. The ability to show you ads for products they want you to buy
  2. The ability to promote their own private-label brands
  3. The data about what AI-assisted shopping actually looks like

This is Amazon's "app store moment." And they learned from Apple: if you're going to allow third parties to build on your platform, you need to control who gets access and extract rent from those you approve.

Architecture of Control: How This Actually Works

Let's talk about the technical stack for a moment, because this is where it gets interesting from an engineering perspective.

The Five-Layer Problem

Layer 1: Consumers delegate shopping tasks to AI agents, often paying $20-200/month for the privilege.

Layer 2: AI Agents (ChatGPT Operator, Perplexity Comet, Amazon Rufus) search, compare, and transact on your behalf.

Layer 3: Trust & Payment Infrastructure (Visa, Mastercard, Stripe) verify agent identity and process payments.

Layer 4: Platform Gatekeepers (Amazon, Google, Apple) control access to inventory and customer data.

Layer 5: Merchants & Brands fulfill orders and watch their margins compress.

Where power concentrates: not at the AI layer, where everyone's focused, but at Layer 3 (payments) and Layer 4 (platform gatekeepers).




Why Payments Matter More Than You Think

Visa and Mastercard are quietly positioning themselves as the critical trust infrastructure for agentic commerce. They're partnering with Cloudflare to implement Web Bot Auth—a cryptographic authentication protocol that lets merchants verify which AI agents are legitimate.

Think about the implications: if every agentic transaction must flow through payment network authentication, then Visa and Mastercard become the gatekeepers of which agents can transact at all. They've turned themselves into the identity verification layer for AI agents, which means they can collect tolls on the entire ecosystem.

This is a brilliant infrastructure play. While everyone's fighting over the AI layer, the payment networks are becoming the new platform.
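The mechanics of that gatekeeping are easy to sketch. Web Bot Auth builds on HTTP request signing: an agent signs each request with a key the merchant can look up and verify, so unknown agents can be rejected outright. The toy below substitutes a shared HMAC secret for the real asymmetric scheme so it stays self-contained, and every name in it (the agent ID, the key registry) is illustrative rather than part of any actual protocol.

```python
import base64
import hashlib
import hmac

# Toy sketch of Web Bot Auth-style request signing. The real protocol uses
# asymmetric keys published in an agent key directory; this stand-in uses a
# shared HMAC secret so the example stays stdlib-only. All names illustrative.

AGENT_KEYS = {"comet-shopper": b"demo-shared-secret"}  # merchant's registry

def sign_request(agent_id: str, method: str, path: str, secret: bytes) -> str:
    """Agent side: sign the request components it is attesting to."""
    payload = f"{agent_id}|{method}|{path}".encode()
    digest = hmac.new(secret, payload, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def verify_request(agent_id: str, method: str, path: str, signature: str) -> bool:
    """Merchant side: look up the agent's key and check the signature."""
    secret = AGENT_KEYS.get(agent_id)
    if secret is None:
        return False  # unknown agent: block, or treat as anonymous traffic
    expected = sign_request(agent_id, method, path, secret)
    return hmac.compare_digest(expected, signature)

sig = sign_request("comet-shopper", "POST", "/checkout", AGENT_KEYS["comet-shopper"])
print(verify_request("comet-shopper", "POST", "/checkout", sig))  # True
print(verify_request("unknown-bot", "POST", "/checkout", sig))    # False
```

Note what verification power implies: whoever runs the key registry decides which agents exist at all, which is exactly why it's worth fighting over.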

The Security Nightmare Nobody Wants to Talk About

Here's the thing that keeps security engineers up at night: traditional fraud detection assumes humans are making purchases. You can look at behavioral patterns, device fingerprinting, velocity checks—all the usual signals that distinguish legitimate users from attackers.

But what happens when the "legitimate" user is an AI agent that behaves like a bot because it is a bot?

The attack surface is enormous:

  • Agent manipulation: fake listings and manipulated reviews crafted to steer AI agents toward bad purchases
  • Automated account takeover: AI can run credential stuffing attacks at scale, then use compromised accounts to make "legitimate" agent purchases
  • Synthetic identity fraud: Generate deepfakes and fake identities that pass agent verification
  • Phishing at industrial scale: AI-generated personalized phishing that tricks both humans and other agents

To implement agentic commerce successfully, you need to solve the impossible problem: identify agents, distinguish legitimate ones from malicious ones, verify consumer intent, and do all of this in real time at massive scale.

This isn't just a "hard problem"—it requires fundamentally rethinking identity, authentication, and trust in ways our current infrastructure wasn't designed for.

Legal Black Hole

The most fascinating aspect of this entire situation is that nobody knows what the law actually says about AI agents making purchases on your behalf.

Consider this scenario: Your AI agent buys you running shoes. They don't fit. Who's responsible?

  • Is the AI agent your "employee" acting under your authority?
  • Is it a contractor working for the AI company?
  • Did you actually "agree" to the purchase, or did the AI misinterpret your intent?
  • Can you return them under standard return policies, or do different rules apply?

The Uniform Electronic Transactions Act (UETA) and E-SIGN Act validate electronic signatures and contracts, but they were written assuming humans click "I agree." They don't tell us how to handle situations where an AI system makes autonomous decisions based on high-level instructions like "buy me running shoes under $100."

And it gets worse. When things go wrong—the agent buys the wrong product, accesses the wrong account, or exposes payment data—who's liable?

The legal frameworks assume someone clicked a button and agreed to terms. But with agentic AI:

  • The consumer gave high-level intent ("I need shoes")
  • The AI developer built the agent with certain objectives
  • The platform (Amazon) sets rules about what's allowed
  • The payment processor enables the transaction

When something breaks, you've got four parties pointing at each other saying "not my fault."

This isn't edge case stuff—this is the fundamental contract law question that needs answering before any of this scales. And right now? It's a complete void.

Three Scenarios

Based on the current trajectory, a few things could happen:

Scenario 1: Platform Dominance (Very High Probability)

Amazon wins the lawsuit. Google, Apple, and other major platforms watch carefully and implement similar policies. The outcome:

  • Platforms allow only "approved" agents
  • Approved agents must share 15-30% revenue with platforms
  • Platforms build superior first-party agents using proprietary data
  • Market concentration increases dramatically

This is the most likely outcome because platforms hold all the leverage. They control access to inventory, customer data, and the ability to transact. If you want your AI agent to work, you play by their rules or you don't play at all.

Winner: Existing platform giants. The "disruption" looks suspiciously like the old oligopoly, just with AI agents instead of apps.

Scenario 2: Payment Network Mediation (Medium Probability)

Visa and Mastercard successfully establish themselves as neutral trust brokers. Their authentication standards become mandatory. Multiple agents can compete, but all must register with payment networks and follow their protocols.

This creates a more open ecosystem than Scenario 1, but you've still got gatekeepers—just different ones. Every transaction generates payment network fees. The rails change hands, but someone still controls the rails.

Winner: Payment networks become infrastructure monopolies. Better than platform domination, but not exactly a free market.

Scenario 3: Regulatory Intervention (Very Low Probability)

Governments step in, mandate open access standards, require algorithmic transparency, and force interoperability. The EU tries this first with AI Act enforcement.

Winner: Consumers and smaller players benefit from enforced competition.

Reality check: Given current U.S. regulatory momentum and the fact that legal frameworks are years behind AI development, this seems highly unlikely. The platforms are moving too fast, and regulators are too slow.

Why This Probably Makes Markets Less Competitive

Here's the uncomfortable truth: despite all the talk about AI "democratizing" commerce and creating more efficient markets, the likely outcome is increased market concentration.

Why? 

Trust Concentrates Around Scale

When AI agents are making autonomous purchases with your money, you need to trust them completely. That trust is hard to build and easy to destroy. Large, established players like Amazon can credibly say "we've processed billions of transactions, here's our security track record."

A startup building a shopping agent? Much harder sell. The trust moat actually gets deeper, not shallower.

Data Moats Become Walls Too High to Jump

The best shopping agent needs to know:

  • Your purchase history
  • Your preferences and budget
  • Your calendar and schedule
  • Your payment methods and addresses
  • Context about why you're shopping

Amazon already has all of this. Google has most of it. A third-party agent has... whatever you manually tell it.

This isn't a gap you can close with "better AI." It's a fundamental data disadvantage that compounds over time.

Network Effects Intensify

Just as traditional commerce requires an ecosystem (platforms, payment processors, logistics, fraud prevention), agentic commerce needs an even more complex interconnected system. The platforms that can bundle these services—authentication, payments, fulfillment, customer service—win by default.

It's the AWS playbook: provide the full stack, make integration seamless, and competitors can't match the convenience.

Power to Block Is Power to Control

This is the key insight from the Amazon-Perplexity fight: if platforms can simply block agents they don't like, then innovation requires permission.

Want to build a revolutionary shopping agent? Great. But if Amazon, Google, and Walmart all block you, your revolutionary agent can't access any inventory. You've built a car with no roads to drive on.

The platforms learned from the app store wars: let a thousand flowers bloom, then harvest the ones that matter.

What This Means for Engineers Building AI Applications

If you're working on AI agents, here's what you need to understand:

Platform Risk Is Your Existential Risk

Don't build on platforms you don't control unless you have explicit agreements in place. The terms of service you're operating under were written before agentic AI existed, and platforms can change the rules whenever they want.

Perplexity is learning this the hard way. They built a business model that required access to Amazon's platform, then discovered Amazon could just say "no."

The Liability Problem Won't Solve Itself

Right now, there's massive ambiguity about who's responsible when AI agents screw up. This ambiguity is risk for everyone in the stack. You need to:

  • Get explicit terms in writing about agent behavior and limits
  • Build audit trails for every decision your agent makes
  • Have clear escalation paths when things go wrong
  • Understand you're probably liable for your agent's actions, even if that's not fair
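On the audit-trail point, here is a minimal sketch of what "log every decision" might look like, assuming no particular framework: structured, timestamped, uniquely identified records that can be replayed later when the four parties start pointing fingers. All names are illustrative.

```python
import json
import time
import uuid

# Minimal audit trail for agent decisions: every action is appended as a
# structured, timestamped record so a disputed purchase can be reconstructed.

class AuditTrail:
    def __init__(self):
        self.records = []

    def log(self, agent_id: str, action: str, detail: dict) -> str:
        record_id = str(uuid.uuid4())
        self.records.append({
            "id": record_id,
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "detail": detail,
        })
        return record_id

    def export(self) -> str:
        """Serialize for storage, or for handing to the other parties in a dispute."""
        return json.dumps(self.records, indent=2)

trail = AuditTrail()
trail.log("comet-shopper", "search", {"query": "running shoes", "budget": 100})
trail.log("comet-shopper", "purchase", {"sku": "SHOE-123", "price": 89.99})
print(len(trail.records))  # 2
```

In production you'd want append-only storage and tamper evidence (e.g. hash chaining), but the shape is the same: no agent action without a record.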

Security Can't Be an Afterthought

The threat model for agentic commerce is genuinely novel. You can't just apply traditional bot detection because legitimate agents are bots. You need:

  • Cryptographic agent authentication (like Web Bot Auth)
  • Behavioral anomaly detection that works for non-human actors
  • Multi-party verification for high-value transactions
  • Fallback to human-in-the-loop when confidence is low
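As one concrete example of "behavioral anomaly detection that works for non-human actors," a classic velocity check can be reframed: instead of asking whether the caller is a bot (it is), it asks whether an authenticated agent/account pair is transacting faster than its mandate plausibly allows. A stdlib-only sketch, with purely illustrative thresholds:

```python
from collections import defaultdict, deque

# Velocity check reframed for agent traffic: track purchase timestamps per
# (agent, account) pair in a sliding window and flag pairs that exceed a
# rate limit, escalating to human review rather than silently failing.

class AgentVelocityCheck:
    def __init__(self, max_purchases: int, window_seconds: float):
        self.max_purchases = max_purchases
        self.window = window_seconds
        self.history = defaultdict(deque)  # (agent, account) -> timestamps

    def allow(self, agent_id: str, account_id: str, now: float) -> bool:
        q = self.history[(agent_id, account_id)]
        while q and now - q[0] > self.window:
            q.popleft()  # drop events outside the sliding window
        if len(q) >= self.max_purchases:
            return False  # over the limit: escalate to human review
        q.append(now)
        return True

check = AgentVelocityCheck(max_purchases=3, window_seconds=3600)
results = [check.allow("comet", "acct-1", t) for t in (0, 10, 20, 30)]
print(results)  # [True, True, True, False]
```

The point of the reframing: the signal is no longer "human vs. bot" but "behavior consistent with the consumer's delegated intent vs. not."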

This is hard, expensive, and essential. The first major security breach involving agent-based shopping will tank consumer trust in the entire category.

Think in Systems, Not Just Models

The failure mode for agentic commerce isn't "the AI makes a mistake." It's "the AI makes a reasonable-seeming decision based on incomplete data, which cascades into a mess of returns, chargebacks, and customer service nightmares."

Good agentic systems need:

  • Clear boundaries on what decisions they can make autonomously
  • Confidence thresholds that trigger human review
  • Graceful degradation when uncertain
  • Mechanisms for users to understand and override decisions
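Those four requirements reduce to a decision gate in front of every autonomous action. A deliberately tiny sketch, with made-up threshold values:

```python
# Decision gate for an autonomous purchase: hard boundaries are checked
# first, then a confidence threshold decides between acting alone and
# degrading gracefully to human review. Threshold values are illustrative.

def decide(price: float, confidence: float, budget: float,
           min_confidence: float = 0.8) -> str:
    if price > budget:
        return "reject"        # hard boundary: never exceed the user's mandate
    if confidence < min_confidence:
        return "ask_human"     # graceful degradation when the agent is unsure
    return "purchase"          # inside boundaries and confident: act autonomously

print(decide(price=89.99, confidence=0.93, budget=100))  # purchase
print(decide(price=89.99, confidence=0.55, budget=100))  # ask_human
print(decide(price=120.0, confidence=0.99, budget=100))  # reject
```

Real systems layer many more boundaries (categories, merchants, return policies), but the ordering matters: boundaries before confidence, and "ask the human" as the default when anything is unclear.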

This is systems engineering, not just prompt engineering.

So Who Wins?

If you're asking "who wins the agentic commerce war," here's my read:

Tier 1: Platform Oligarchs (Amazon, Google, Apple) - They control access to inventory and customers. They can block competitors and extract rent from those they allow. Amazon's lawsuit against Perplexity is them establishing this reality.

Tier 2: Payment Networks (Visa, Mastercard) - Becoming the critical trust infrastructure. Every transaction flows through them, and they're setting authentication standards for the entire ecosystem.

Tier 3: AI Insurgents (OpenAI, Perplexity, Anthropic) - High risk, high reward. They have the AI capabilities and consumer mindshare, but they need platform access to deliver value. Many will get squeezed or forced into revenue-sharing deals.

The Losers: Traditional retailers and brands who get reduced to "background utilities" in agent-controlled marketplaces. TripAdvisor is already down 30% in traffic; AllRecipes has lost 15%. This is the canary in the coal mine.

The uncomfortable parallel: this is the app store model all over again. Platforms create "open" ecosystems, welcome innovation, then monetize, control, and eventually squeeze everyone building on top.

In five years, we'll have agentic commerce. But it will likely be dominated by 3-5 massive platforms that control access, set standards, and extract rent. The "revolution" will look suspiciously like the old regime—just with better AI.

The Bottom Line

Agentic commerce is coming whether we're ready for it or not. The technology works, the market opportunity is massive, and the big platforms are already building it.

But let's not fool ourselves about what we're building. This isn't some perfect future where AI agents create perfect market efficiency and infinite consumer choice. It's a new battleground for the same old fight: who gets to control access to customers, and who gets to extract rent from transactions.

Amazon is suing Perplexity because they understand what's at stake. This isn't about "covert access" or "customer security"—those are the legal justifications. The real fight is about whether Amazon gets to control agentic commerce the same way Apple controlled app distribution and Google controlled digital advertising.

And based on history, platform power, and the economics of trust at scale, they probably will.

We've seen this movie before. The technology is new, but the plot is depressingly familiar.

