Executive Summary
- Claude Code reportedly reached $1B annualized revenue in 6 months—potentially the fastest developer tool growth ever recorded
- MCP (Model Context Protocol) has achieved broad adoption, with OpenAI, Google, Microsoft, and AWS all supporting the standard
- Enterprise LLM market share has shifted significantly toward Anthropic, though exact figures vary by source
- The thesis: Anthropic is pursuing an infrastructure strategy (controlling how AI connects to tools) rather than purely competing on model capability
- Key risks: OpenAI's distribution advantages, Google's integration depth, open-source alternatives, and execution challenges remain substantial
The Numbers in Context
Before analyzing strategy, let's examine the available data—with appropriate caveats about what we know and don't know.
| Metric | Anthropic | OpenAI | Source/Confidence |
|---|---|---|---|
| Enterprise LLM Share (2025) | ~40% | ~27% | Menlo Ventures* |
| Enterprise Coding Share | ~54% | ~21% | Menlo Ventures* |
| Est. Revenue Run Rate | ~$9B | ~$20B | Press reports** |
| Claude Code Revenue (6mo) | ~$1B ARR | N/A | Anthropic claim |
| MCP Monthly Downloads | 97M+ | Adopted | npm/PyPI data |
| Consumer MAU | ~19M | ~800M | Press estimates** |
\*Menlo Ventures 2025 State of Generative AI Report; enterprise surveys may have sampling bias. \*\*Multiple press sources; private-company financials are estimates.
⚠️ Methodological Note: Enterprise market share figures come from vendor surveys and customer interviews, which can suffer from selection bias. Revenue figures for private companies are estimates based on reported funding rounds, hiring data, and industry sources. Treat specific numbers as directional indicators rather than precise measurements.
The consumer vs. enterprise divergence is striking: OpenAI has roughly 40x more monthly active users, yet Anthropic reportedly captures a larger share of enterprise LLM spending. If accurate, this suggests fundamentally different go-to-market strategies rather than one company simply "winning."
The Infrastructure Thesis
The dominant AI narrative focuses on model benchmarks: who has the most parameters, the highest scores on SWE-bench, the most impressive demos. This framing treats AI companies like sports teams competing for a championship.
An alternative framing—and the thesis of this analysis—is that model capability is becoming table stakes. The more durable competitive advantage may come from controlling the orchestration layer: the software that connects AI models to real-world tools, manages agent workflows, and ensures reliability at scale.
This is analogous to how cloud computing evolved. AWS didn't win by having the best servers. It won by defining the primitives (S3, EC2, Lambda) that became the vocabulary of cloud infrastructure. Similarly, Stripe didn't win payments by having the best fraud detection—it won by making integration trivially easy.
📈 The Bull Case: If models commoditize and MCP becomes the default integration standard, Anthropic could capture value at the orchestration layer regardless of which model is "best" at any given moment. This would be a more defensible position than pure model capability.
Model Context Protocol: What Actually Happened
In November 2024, Anthropic released the Model Context Protocol (MCP)—an open standard for connecting AI systems to external tools and data sources. The adoption timeline that followed was genuinely remarkable:
| Date | Event | Significance |
|---|---|---|
| Nov 2024 | Anthropic releases MCP | Open-source from day one |
| Mar 2025 | OpenAI adopts MCP | Primary competitor validates standard |
| Apr 2025 | Google confirms support | Three major providers aligned |
| Aug 2025 | Microsoft integrates MCP | Enterprise distribution secured |
| Nov 2025 | 97M monthly downloads | Developer adoption at scale |
| Dec 2025 | Donated to Linux Foundation | Governance moves to neutral body |
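To make the protocol concrete, here is a minimal sketch of an MCP server in Python. It assumes the official Python MCP SDK (the `mcp` package) and its FastMCP helper; the server name and the `get_ticket_status` tool are invented for illustration.

```python
# Minimal sketch of an MCP server exposing one tool.
# Assumes the official Python MCP SDK ("mcp" on PyPI) and its FastMCP helper;
# the server name and tool are illustrative, not a real integration.
from mcp.server.fastmcp import FastMCP

# Create a named server; an MCP-capable host discovers its tools over the
# protocol rather than through a vendor-specific plugin system.
mcp = FastMCP("ticket-lookup")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the status of a support ticket by ID."""
    # A real server would query an internal system; hard-coded for the sketch.
    return f"Ticket {ticket_id}: open, assigned to on-call"

if __name__ == "__main__":
    # Serve over stdio, the transport most hosts use for local MCP servers.
    mcp.run()
```

The point of the standard is that any MCP-capable host can discover and call that tool over the same protocol, with no host-specific glue code.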
The strategic insight here is subtle. By open-sourcing MCP and getting competitors to adopt it, Anthropic accomplished several things simultaneously:
- It prevented a fragmented ecosystem of incompatible standards
- It positioned itself as a neutral infrastructure provider rather than a walled garden
- It ensured that tooling built for Claude would work with other models (reducing customer lock-in concerns)
The December donation to the Linux Foundation's Agentic AI Foundation—co-founded with OpenAI and Block—further reinforces the "neutral infrastructure" positioning. Anthropic retains influence through technical leadership and early-mover advantage, but governance is formally independent.
Claude Code: The Revenue Proof Point
Anthropic claims Claude Code reached $1 billion in annualized revenue within six months of its February 2025 launch. If accurate, this would represent unprecedented growth in developer tools—GitHub Copilot took over three years to reach comparable scale.
What distinguishes Claude Code from autocomplete-style coding assistants is its agentic architecture. Rather than suggesting the next line of code, Claude Code can autonomously:
- Read entire codebases
- Plan multi-step changes
- Execute terminal commands
- Iterate on complex tasks over extended periods
This is what some analysts call an "agentic harness"—the engineering layer that transforms a probabilistic language model into a system that reliably completes tasks. When Meta acquired the AI startup Manus for a reported $2 billion in December, industry observers noted they were buying orchestration capability, not model capability.
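To illustrate what such a harness does, here is a schematic agent loop in Python: the model proposes an action, the harness executes it, and the observation is fed back until the task completes. The `call_model` stub and the tool table are hypothetical stand-ins, not Claude Code's actual internals.

```python
# Schematic sketch of an "agentic harness": a loop that lets a model plan,
# request tools, observe results, and iterate until the task is done.
# call_model and the tool implementations are hypothetical stand-ins.
from typing import Callable

def call_model(history: list[dict]) -> dict:
    """Stand-in for an LLM API call returning a tool request or a final answer."""
    # A real harness would send `history` to a model API here.
    return {"type": "final", "content": "done"}

TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: open(path).read(),
    "run_command": lambda cmd: f"(would execute: {cmd})",
}

def run_agent(task: str, max_steps: int = 20) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["type"] == "final":                 # model says the task is complete
            return action["content"]
        result = TOOLS[action["tool"]](action["input"])        # execute the requested tool
        history.append({"role": "tool", "content": result})    # feed the observation back
    return "step limit reached"

if __name__ == "__main__":
    print(run_agent("fix the failing test"))
```

The engineering value sits in everything around the model call: tool selection, error handling, context management, and knowing when to stop.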
⚠️ What We Don't Know: The $1B figure is Anthropic's self-reported claim. We don't have independent verification of revenue, nor do we know the customer composition (how much is enterprise vs. individual developers), retention rates, or unit economics. The growth is impressive if accurate, but treat it as a data point rather than audited financials.
The Enterprise Market Share Shift
According to Menlo Ventures' annual survey, Anthropic's share of enterprise LLM API spending increased from 12% in 2023 to 24% in 2024 to approximately 40% in 2025. Over the same period, OpenAI's share declined from 50% to 40% to 27%.
Several factors may explain this shift:
Safety positioning: Anthropic's emphasis on Constitutional AI, responsible scaling policies, and transparent limitations resonates with enterprise risk managers. When a Fortune 500 CIO evaluates AI vendors, "we prioritize safety" is often more compelling than "we move fast."
Coding dominance: Developers are often the initial advocates for AI adoption within enterprises. Anthropic's consistent leadership on coding benchmarks (from Claude 3.5 Sonnet in June 2024 through Opus 4.5) created a developer constituency that pulled the company into enterprise accounts.
Business model alignment: OpenAI's January 2026 announcement of advertising in ChatGPT introduced a potential tension between user interests and advertiser interests. Anthropic's subscription-only model avoids this tension, which may matter more to enterprise buyers than consumers.
The Bear Case: What Could Go Wrong
A balanced analysis requires examining the substantial risks to Anthropic's infrastructure strategy:
1. OpenAI's Distribution Advantage
OpenAI has 800 million monthly active users to Anthropic's 19 million. ChatGPT is the default AI interface for most people. This creates powerful network effects: more users generate more data, which improves models, which attracts more users. OpenAI also has deep Microsoft integration, giving it distribution into every Windows device and Office installation.
2. Google's Integration Depth
Google controls Android, Chrome, Search, Gmail, and Workspace—touching billions of users daily. If Google successfully integrates Gemini across this surface area, the distribution advantage could be insurmountable. Google also has essentially unlimited compute resources and the ability to subsidize AI services indefinitely.
3. Open-Source Competition
Meta's Llama models and other open-source alternatives are rapidly improving. If enterprises can run capable models on their own infrastructure without paying API fees, the entire SaaS model for AI could be disrupted. This is particularly relevant for the most cost-sensitive applications.
4. MCP Is Not a Lock-In
MCP is an open standard that all major providers support. This was intentional—but it also means MCP doesn't create switching costs. A customer using MCP with Claude can switch to GPT or Gemini with minimal friction. The infrastructure play only works if Anthropic can build durable advantages on top of the standard.
5. Execution Risk
Anthropic is a much smaller company than its competitors. OpenAI has raised $20B+ and has Microsoft's backing. Google has essentially infinite resources. Anthropic has raised approximately $10B total. In a capital-intensive race, runway matters.
Strategic Implications
For readers making AI strategy decisions, here are the actionable takeaways:
Evaluate infrastructure, not just models
The model you choose today will likely be obsolete within 18-24 months. The integration patterns, tooling, and workflows you build will last longer. When evaluating AI vendors, ask about their approach to tool integration, agent orchestration, and standards support. MCP adoption is a reasonable proxy for infrastructure maturity.
Consider the trust differential
For sensitive workloads—healthcare, finance, legal, HR—vendor positioning on safety, privacy, and data handling isn't a nice-to-have. It's a selection criterion that affects regulatory risk and stakeholder buy-in. The ad-supported vs. subscription-only distinction may matter more than benchmark scores.
Avoid single-vendor lock-in
The AI landscape is evolving too rapidly to bet everything on one provider. Build abstractions that allow model switching. Use open standards like MCP. Maintain relationships with multiple vendors. The companies that preserve optionality will be better positioned regardless of how the competitive landscape evolves.
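As a sketch of what "build abstractions that allow model switching" can look like in code, the example below defines a small provider-agnostic interface with stub adapters. The class and method names are invented for illustration; a real version would wrap each vendor's official client.

```python
# Hypothetical sketch of a thin provider abstraction so application code
# is not hard-wired to one vendor's SDK. The adapters are stubs.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ClaudeAdapter:
    def complete(self, prompt: str) -> str:
        # Would call Anthropic's API here.
        return f"[claude] {prompt}"

class GPTAdapter:
    def complete(self, prompt: str) -> str:
        # Would call OpenAI's API here.
        return f"[gpt] {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Application code depends only on the ChatModel interface,
    # so swapping vendors is a one-line configuration change.
    return model.complete(f"Summarize: {text}")

if __name__ == "__main__":
    print(summarize(ClaudeAdapter(), "quarterly report"))
```

Combined with open standards like MCP for tool access, a layer like this keeps switching costs low no matter how the vendor landscape shifts.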
Watch the coding market
Developer tools are the leading indicator for enterprise AI adoption. Companies that win developer mindshare tend to convert it into broader enterprise deployments. The Claude Code growth—if it continues—signals where enterprise spending may shift over the next 12-24 months.
The Bottom Line
The infrastructure thesis is compelling but not proven. Anthropic is executing a differentiated strategy: build open standards, win enterprise trust through safety positioning, dominate the developer market, and expand from there. The early results—MCP adoption, Claude Code revenue, enterprise market share gains—are consistent with the strategy working.
But substantial risks remain. OpenAI and Google have distribution advantages that could prove decisive. Open-source alternatives continue improving. And MCP's openness, while strategically sound, doesn't create the lock-in that would make Anthropic's position truly defensible.
The honest assessment: Anthropic is executing well on a smart strategy, the numbers are impressive if accurate, but declaring them the winner of the AI race would be premature. This is a multi-year competition with several credible contenders, and the outcome is genuinely uncertain.
What is clear is that the "model race" framing misses important dynamics. Whoever controls how AI agents connect to the world—the orchestration layer—may capture more durable value than whoever has the highest benchmark scores at any given moment. Anthropic understands this. Whether they can execute against better-resourced competitors remains to be seen.
Sources & Further Reading
- Menlo Ventures: 2025 State of Generative AI in the Enterprise
- Anthropic Blog: Claude Code Announcement (February 2025)
- Linux Foundation: Agentic AI Foundation Announcement (December 2025)
- MCP Documentation: modelcontextprotocol.io
Note: Revenue figures for private companies are estimates based on press reports and industry sources

