
OpenAI, Anthropic, and the $121 Billion Question: Can AI’s Biggest Labs Outgrow Their Compute Bills?

by Vamsi Chemitiganti

In my recent trilogy analyzing AI market concentration — “Is There An AI Concentration Crisis: When 42 Stocks Become the Entire Market,” “Why Enterprise AI Strategy Must Diverge From Hyperscaler Playbooks,” and “Navigating the AI Concentration: The Three Questions Every Enterprise Must Answer” — I argued that enterprises cannot and should not attempt to compete with organizations spending 70% of revenue on AI infrastructure. The confidential financial documents that OpenAI and Anthropic shared with investors ahead of their 2026 funding rounds, recently reported by the Wall Street Journal, prove this thesis more emphatically than I could have imagined.

That is not a typo. This is the business model.

Let me start with a number that should make every enterprise technology leader pause: $121 billion.

That is what OpenAI expects to spend on computing power for AI research in a single year — 2028. To put that in perspective, it exceeds the entire GDP of over 130 countries. And even after nearly doubling its revenue from the prior year, OpenAI still anticipates burning $85 billion in losses that year alone. Those losses would dwarf those of virtually any public company in history.

The Structural Economics of Building Intelligence

Both OpenAI and Anthropic are racing toward potentially record-breaking IPOs by year-end. The financials tell a story that every CIO and CFO needs to internalize — not because you are investing in these IPOs, but because the cost structure of the AI models your enterprise depends on is fundamentally unsustainable under current economics.

Here is the core tension:

Training costs are exponential. Each jump in model intelligence is harder to achieve and costs more than the last. OpenAI’s AI model training costs are projected to escalate from roughly $25 billion in 2026 to $121 billion by 2028 and $125 billion in 2029, before dipping back below $100 billion in 2030. Anthropic does not expect to spend nearly as much, but its projections tell a structurally similar story of mounting compute costs.

Revenue is explosive — but not explosive enough. OpenAI is generating approximately $2 billion in monthly revenue, with an annualized run rate around $24 billion. Enterprise usage now accounts for over 40% of total revenue. Anthropic has been even more remarkable on the growth front — surging from $1 billion in annualized revenue in January 2025 to $9 billion by end of 2025, and an astonishing $30 billion ARR by April 2026. That is a 30x surge that briefly pushed Anthropic’s implied market value past OpenAI on secondary markets.

Yet even this hypergrowth cannot outrun the compute bill.
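To make the growth claim concrete, here is a quick back-of-the-envelope calculation in Python. The ARR endpoints are as reported; the 15-month window (January 2025 to April 2026) is my approximation, so treat the result as a rough implied rate rather than a disclosed figure:

```python
# Back-of-the-envelope check of Anthropic's reported growth: $1B ARR in
# January 2025 to $30B ARR by April 2026 (~15 months, approximated).

def implied_monthly_growth(start_arr, end_arr, months):
    """Constant compound monthly growth rate turning start_arr into end_arr."""
    return (end_arr / start_arr) ** (1.0 / months) - 1.0

rate = implied_monthly_growth(start_arr=1.0, end_arr=30.0, months=15)
print(f"Implied compound monthly growth: {rate:.1%}")  # roughly 25% per month
```

Sustaining anything near 25% compounded monthly is extraordinary — and it still does not close the gap against a training budget growing toward $121 billion.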

The Two-Ledger Problem

Both companies now report two different measures of profitability — one that includes model training costs, and one that strips them out. This accounting duality is itself a signal worth examining.

Excluding training costs, OpenAI is on track to turn a small pretax operating profit this year. Anthropic tells a similar story under its best-case scenario. The core business of serving inference queries to paying customers is approaching viability.

Including training costs, OpenAI does not expect to break even until the 2030s. Anthropic forecasts reaching that milestone sooner, but the trajectory remains one of sustained, massive cash burn.

This two-ledger approach is not just creative accounting — it reveals a fundamental architectural truth about the AI industry. The operational economics of running AI models (inference) are improving. The research economics of building the next generation of models (training) are deteriorating. These are two structurally different cost curves moving in opposite directions.
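The two-ledger split is easy to see in a toy model. In the sketch below, the $121 billion training figure is the WSJ-reported 2028 projection; the revenue, inference, and overhead numbers are hypothetical placeholders I chose purely to illustrate how the two ledgers diverge:

```python
# Illustrative "two-ledger" view. Only the training figure ($121B for 2028)
# comes from the WSJ reporting; all other inputs are hypothetical.

def operating_result(revenue, inference_cost, other_opex, training_cost,
                     include_training=True):
    """Pretax operating result, with training spend included or stripped out."""
    costs = inference_cost + other_opex
    if include_training:
        costs += training_cost
    return revenue - costs

# Hypothetical inputs, in billions of USD
revenue = 50.0      # placeholder
inference = 25.0    # placeholder: roughly half of revenue, per the pattern reported
other_opex = 10.0   # placeholder
training = 121.0    # projected 2028 training compute (WSJ)

ledger_a = operating_result(revenue, inference, other_opex, training,
                            include_training=False)
ledger_b = operating_result(revenue, inference, other_opex, training,
                            include_training=True)

print(f"Excluding training: {ledger_a:+.1f}B")  # a modest operating profit
print(f"Including training: {ledger_b:+.1f}B")  # a massive loss
```

Whatever the real inputs, the structure is the same: one ledger can show a profitable serving business while the other shows losses dominated by a single line item.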

The Inference Cost Trajectory — A Silver Lining With Caveats

There is a genuinely positive signal buried in these financials. Inference costs — the billions spent processing queries on deployed AI systems — currently consume more than half of revenue for both companies. But that percentage is expected to decline over time, indicating that the technology is becoming cheaper to run at scale.

However, there is a critical nuance that enterprise leaders must understand: only a tiny fraction of ChatGPT’s users actually pay for the service, so OpenAI earns no revenue on a large portion of the usage that drives its inference costs. The company’s rationale is that free users drive broader adoption and can eventually be monetized through advertising or subscription conversion. Most of Anthropic’s revenue, by contrast, comes from enterprises — a structurally healthier model from a unit economics perspective.

For enterprise buyers, this distinction matters. When you evaluate your AI vendor’s pricing stability, you need to understand whether you are subsidizing a consumer freemium strategy or paying into a sustainable enterprise business model.
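The subsidy effect can be sketched as a coverage ratio: revenue earned per dollar of inference cost across the whole user base. Every number below is hypothetical — the article reports only that a small fraction of users pay and that inference consumes more than half of revenue — but the shape of the comparison holds:

```python
# Hedged sketch of freemium vs. enterprise unit economics.
# All inputs are hypothetical placeholders, not reported figures.

def inference_coverage(paying_share, revenue_per_payer, cost_per_user):
    """Revenue earned per dollar of inference cost, averaged over all users."""
    revenue_per_user = paying_share * revenue_per_payer
    return revenue_per_user / cost_per_user

# Consumer freemium: a small paying minority subsidizes everyone's inference
freemium = inference_coverage(paying_share=0.05,       # hypothetical
                              revenue_per_payer=20.0,  # $/month, hypothetical
                              cost_per_user=1.5)       # $/month, hypothetical

# Enterprise: essentially every seat is paid
enterprise = inference_coverage(paying_share=1.0,
                                revenue_per_payer=30.0,
                                cost_per_user=5.0)

print(f"freemium coverage:   {freemium:.2f}x")   # below 1.0x: inference loses money
print(f"enterprise coverage: {enterprise:.2f}x") # well above 1.0x
```

A coverage ratio below 1.0x means every incremental free user deepens the loss — which is why the freemium bet depends entirely on eventual monetization.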

What This Means for Enterprise AI Strategy

I have written extensively about why enterprise AI strategy must diverge from hyperscaler playbooks. These financial disclosures reinforce three critical imperatives:

  1. Vendor Concentration Risk Is Real and Growing

Both OpenAI and Anthropic will burn through enormous amounts of cash in the coming years. They are counting on IPO investors to buoy their businesses. Bankers are even lobbying index providers like Nasdaq to loosen listing rules so these companies can access broader capital pools faster. When your core AI vendor’s business model depends on continuously raising capital at unprecedented scale, that is a supply chain risk that belongs on every CIO’s risk register.

  2. The Training Cost Arms Race Will Reshape Pricing

OpenAI and Anthropic are releasing new model versions at a faster cadence than ever, while pouring more resources into each training run. This arms race shows no signs of slowing. The costs will eventually flow downstream. Enterprise customers enjoying aggressive introductory pricing should be planning for a world where AI model access becomes significantly more expensive — or where vendors pivot to extract more value from their highest-paying customers.

  3. The Enterprise Moat Is in Application, Not Infrastructure

As I argued in “Why Enterprise AI Strategy Must Diverge From Hyperscaler Playbooks,” enterprises should not be competing at the infrastructure layer. These financials make the case irrefutable. Instead, the enterprise advantage lies in proprietary data, domain-specific fine-tuning, workflow integration, and the organizational capability to deploy AI at the application layer — what I have called the “AI Factory” operating model.

The IPO Question — And Why It Is Really an Enterprise Question

The coming OpenAI and Anthropic IPOs will be landmark events in technology history. But for enterprise technology leaders, the real question is not whether to buy the stock. It is whether the economic model underpinning your AI strategy is built on sustainable foundations.

When a company projects $85 billion in annual losses and frames it as an investment thesis, we are in uncharted territory. The venture capital firms that have stomached these losses did so because OpenAI and Anthropic are among the fastest-growing businesses in the history of technology. But growth and sustainability are not the same thing.

The competitive dynamics are also worth watching closely. OpenAI was reportedly caught flat-footed when Anthropic released a new version of Claude Code last fall, and has since poured more resources into its Codex coding assistant while prioritizing enterprise sales. This kind of reactive spending — where competitive pressure forces accelerated investment regardless of ROI — is precisely the dynamic that makes the cost trajectory so difficult to control.

The Bottom Line

We are witnessing the most capital-intensive technology buildout since the transcontinental railroad — except the railroad eventually generated predictable, sustainable revenue from moving physical goods. The AI labs are building something far more abstract, and the path to sustainable economics remains, at best, a projection on a spreadsheet.

For enterprise leaders, the strategic imperative is clear: build your AI capabilities on the assumption that the current pricing and availability of frontier models is temporary. Invest in portability. Invest in multi-model architectures. Invest in the organizational muscle to extract value from AI at the application layer, where your domain expertise — not your compute budget — is the differentiator.

The IPO prospectuses will tell a story of revolutionary technology and exponential growth. The balance sheets tell a more sobering story of an industry that has not yet solved its own economics. Both stories are true. The enterprise strategist’s job is to build for both realities simultaneously.

This analysis is based on financial data reported by the Wall Street Journal (https://www.wsj.com/tech/ai/openai-anthropic-ipo-finances-04b3cfb9?mod=article_inline) from confidential documents shared with investors ahead of OpenAI and Anthropic’s 2026 funding rounds, as well as revenue figures reported by Sacra and other financial analysts.


Disclaimer

This blog post and the opinions expressed herein are solely my own and do not reflect the views or positions of my employer. All analysis and commentary are based on publicly available information and my personal insights.

