Why Apple's $1B Intelligence Rental Is Actually Brilliant
Apple recently announced they're paying Google approximately $1 billion annually to power Siri with Gemini. It might look like Apple admitting defeat in AI.
That reading gets it exactly backwards.
Apple just confirmed what the smartest companies already know: If you have distribution and trust, renting intelligence is the power move. And Meta's desperate open-sourcing of Llama proves what happens when you lack the moat that makes renting viable.
The Deal Everyone Is Misreading
Apple looked at building a competitive foundation model in-house. The real cost:
- $10-20 billion over 5 years
- Hundreds of ML researchers (competing with Google, OpenAI, Anthropic for talent)
- Building training infrastructure from scratch
- 3-5 years to catch up to Google's decade-long DeepMind advantage
Apple's response: "We'll pay you a billion a year to skip all that."
This isn't weakness. It's refusing to fight a multi-billion-dollar battle on territory where you have zero advantage.
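As a rough back-of-envelope check on those numbers - the build estimates above, the reported $1B/year fee, and an assumed ~$390B in annual Apple revenue (an assumption, not from the announcement) - here's the arithmetic:

```python
# Back-of-envelope sketch of build vs. rent over a 5-year horizon.
# All figures are the estimates from this post plus an assumed revenue number,
# not Apple disclosures.

build_cost_low, build_cost_high = 10e9, 20e9  # estimated cost of an in-house frontier model
rent_cost_per_year = 1e9                      # reported Google/Gemini deal
years = 5
apple_annual_revenue = 390e9                  # assumption: roughly Apple's recent annual revenue

rent_total = rent_cost_per_year * years
print(f"Rent for {years} years:  ${rent_total / 1e9:.0f}B")
print(f"Build over {years} years: ${build_cost_low / 1e9:.0f}B-${build_cost_high / 1e9:.0f}B")
print(f"Rent as share of annual revenue: {rent_cost_per_year / apple_annual_revenue:.2%}")
# Prints ~$5B rented vs. $10B-$20B built, and roughly 0.26% of revenue per year.
```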
For $1 billion annually - 0.25% of their revenue, less than they spend on store design - Apple bought:
Optionality without commitment. If Google's models fall behind, switch to Anthropic, OpenAI, or whoever wins next generation. The integration layer (Private Cloud Compute, iOS hooks) works with any sufficiently capable model.
Speed without technical debt. Ship AI features this year instead of 2028. No sunk costs, no research teams to maintain, no infrastructure to depreciate.
Competitive intelligence. By being Google's customer, Apple sees exactly what's possible with current models, what's improving, what's still broken. Learning the constraints without paying for research.
But here's what Apple did not rent: the relationship with 2 billion iOS device owners. The App Store monopoly developers can't escape. The hardware-software integration that took twenty years to build. The premium pricing power from brand trust.
Apple is renting the commodity and owning the moat.
What Meta's Strategy Actually Teaches
Meta spent billions building Llama, then gave it away for free. Open source. No licensing fees. Available to anyone, including direct competitors.
This isn't generosity. This is a calculated move to prevent moats from forming in the intelligence layer.
Meta absolutely COULD rent models like Apple does - nothing stops them from using Claude, GPT, or Gemini. They have the money, the distribution, 3 billion users. But at their scale, with their existing compute infrastructure and data, building is actually cost-effective.
Meta's real strategic choice isn't BUILD vs RENT. It's what they did AFTER building: they gave it away for free.
Here's why: Meta's actual business is advertising on social feeds. Their nightmare scenario is Google or OpenAI building such a strong moat in AI that it becomes a competitive bottleneck. Imagine if accessing frontier AI required paying Google, and Google could prioritize their own products or charge Meta premium rates.
By open-sourcing Llama, Meta is trying to do one thing: commoditize intelligence so nobody can build a moat there.
If Llama is free and good enough, then intelligence can't become a competitive advantage. If it's free, Google can't charge Apple $1 billion for preferential access. If it's free, startups can't build AI-native products that threaten Meta's ad business without Meta having access to the same capabilities.
This is a deliberate choice about WHERE moats should form. Meta is saying: "We're fine with moats in distribution, in user relationships, in advertising infrastructure. But intelligence itself? We need that to stay commodity."
Uncomfortable Insight
The Apple-Meta comparison reveals something important: different companies want moats in different layers.
Apple can rent because their moat is in ecosystem and distribution:
- 2 billion devices creating lock-in
- An ecosystem developers can't afford to leave
- Premium pricing power from brand trust
- Hardware integration creating switching costs
For Apple, intelligence being a commodity is PERFECT. It means they can access the best models without building research teams, and their real advantages (ecosystem control, user trust) remain defensible.
Meta builds and open sources because they want to PREVENT moats from forming in intelligence:
- They have the scale and infrastructure to build cost-effectively
- They need intelligence to stay commodity to protect their ad business
- They can't afford Google or OpenAI controlling access to frontier AI
Apple pays $1 billion to rent what Meta gives away for free. But they're not in the same strategic position - they're taking opposite routes to the same outcome.
Apple wants intelligence to be rented infrastructure (keeps it commodity, lets them focus on ecosystem). Meta wants intelligence to be free infrastructure (keeps it commodity, prevents competitors from building moats there).
Both strategies treat intelligence as commodity. The difference is how they achieve that outcome.
Pattern Across Big Tech
Amazon is doing the Apple strategy at scale. AWS Bedrock hosts everyone's models - Claude, Llama, Cohere, their own Titan. They don't care who wins the model race because infrastructure is their moat.
Google is the only one playing both sides profitably. They sell to Apple ($1B/year), power their own products, AND offer Vertex AI to enterprises. But even Google's strategy depends on their search monopoly - the intelligence itself is just one revenue stream. To some extent, they're also thinking like Meta.
Companies that RENT to keep intelligence commodity:
- Apple (ecosystem lock-in is the moat)
- Amazon (infrastructure dominance is the moat)
- Enterprise SaaS with strong moats (Salesforce, Adobe)
These companies WANT intelligence to be rented commodity infrastructure. It protects their actual moats.
Companies that BUILD then OPEN SOURCE to keep intelligence commodity:
- Meta (prevent competitors from building intelligence moats)
- Mistral (European AI sovereignty positioning)
These companies spend billions building, then give it away to prevent moats from forming in the intelligence layer.
Companies trying to BUILD moats IN intelligence:
- OpenAI (model quality as primary differentiation)
- Anthropic (model quality + safety positioning)
These companies are betting that model quality itself can remain a defensible moat.
Ecosystem Defense Through Rental
Apple's strategy is more sophisticated than just "rent the AI." They're using rented intelligence to strengthen their ecosystem while avoiding the sunk cost trap that kills tech giants.
The playbook:
- Rent frontier models to ship competitive AI features fast
- Build the integration layer in-house (Private Cloud Compute, iOS hooks)
- Own the user relationship and trust (privacy positioning)
- Let model providers compete for their business
- Switch providers when someone gets better
Every AI feature makes iOS more valuable. Every AI integration makes it harder to leave Apple's ecosystem. But none of it requires winning the model training arms race.
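To make that playbook concrete, here's a minimal sketch of what a provider-agnostic integration layer can look like in code. This is an illustration under assumed names only - the providers and the complete() interface are hypothetical, not Apple's actual Private Cloud Compute design or any vendor's real API.

```python
# Minimal sketch of a provider-agnostic integration layer: the app owns the
# interface and the user relationship; the model behind it is swappable.
# Provider names and methods are illustrative, not any vendor's real API.
from dataclasses import dataclass
from typing import Protocol


class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...


@dataclass
class RentedProvider:
    name: str

    def complete(self, prompt: str) -> str:
        # A real system would call the vendor's API (Gemini, Claude, GPT, ...).
        return f"[{self.name}] response to: {prompt}"


class Assistant:
    """The layer the ecosystem owner controls: routing, privacy, UX."""

    def __init__(self, provider: ModelProvider):
        self.provider = provider

    def switch_provider(self, provider: ModelProvider) -> None:
        # Switching vendors is a configuration change, not a product rebuild.
        self.provider = provider

    def ask(self, prompt: str) -> str:
        return self.provider.complete(prompt)


assistant = Assistant(RentedProvider("gemini"))
print(assistant.ask("Summarize my unread messages"))
assistant.switch_provider(RentedProvider("claude"))  # if a better model appears
print(assistant.ask("Summarize my unread messages"))
```

The point of the sketch: the only part that changes when a better model appears is the provider object, while everything that touches users stays put.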
Compare Meta's position. They spend billions on Llama to:
- Prevent Google/OpenAI from building intelligence moats
- Protect their ad business from AI disruption
- Maybe get PR credit for "open source leadership"
One company strengthens their moat. The other desperately tries to prevent competitors from building one.
What This Means for Model Builders
Apple's and Meta's approaches aren't interchangeable - they're optimized for different strategic contexts.
Companies trying to build proprietary moats IN intelligence itself (OpenAI, Anthropic) are betting against both Apple AND Meta's preferred outcome.
If either Apple or Meta succeeds in keeping intelligence commodity - whether through competitive rental markets or open source proliferation - then intelligence itself can't be a sustainable moat.
The smartest strategic question isn't "should I build or rent intelligence?"
It's "where do I want moats to form, and does my intelligence strategy support or undermine that goal?"