Anthropic has committed to spending more than $100 billion on chips and computing power from Amazon over the next decade, securing up to 5GW of capacity to train and run Claude. Amazon, in turn, is investing $5 billion immediately at Anthropic's $380 billion valuation, with up to $20 billion more over time.
Pair this with deals struck earlier this month with Google and Broadcom, and Anthropic is quietly assembling roughly 10GW of total compute capacity, enough to sustain frontier model development well into the next decade.
At current market prices, $100 billion buys approximately 2GW of capacity. Anthropic's broader infrastructure push implies total spend well in excess of the headline figure.
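That gap can be sanity-checked with back-of-envelope arithmetic. All inputs come from the figures above; the per-GW rate is inferred from the article's own "$100 billion buys roughly 2GW" claim, not a disclosed price:

```python
# Illustrative check of the capacity math; figures are the article's, the
# per-GW rate is an inference, not a disclosed contract price.
headline_spend_bn = 100        # committed spend, $bn
gw_at_market_price = 2         # what $100bn buys at current market prices
cost_per_gw_bn = headline_spend_bn / gw_at_market_price  # ~$50bn per GW

secured_gw = 5                 # capacity secured under the Amazon deal
implied_spend_bn = secured_gw * cost_per_gw_bn

print(f"Implied spend at market rates: ${implied_spend_bn:.0f}bn")
# 5GW at ~$50bn/GW implies ~$250bn, well above the $100bn headline.
```

On those assumptions, securing 5GW at market rates would imply roughly $250 billion of spend, which is why the headline figure likely understates the total commitment.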
A structure that is circular by design
This isn't a conventional procurement deal. Anthropic raises equity from hyperscalers, then deploys the proceeds back to those same partners through chip and cloud purchases. Amazon benefits twice over, on its equity position and on revenue from its Trainium chip series, which it is actively positioning as a rival to Nvidia's GPUs.
It's a model OpenAI has adopted in parallel, having secured up to $50 billion from Amazon after renegotiating its exclusive arrangement with Microsoft. The implication is structural: the frontier AI capital cycle is consolidating around a handful of hyperscaler and chipmaker relationships, and access to that infrastructure is becoming as competitively significant as the models themselves.
Why it matters now
Anthropic's annualised revenue has surged from $9 billion at end-2024 to over $30 billion. The broader rollout of Claude Mythos, currently limited to select partners, will push compute demand higher still. For investors tracking the private AI landscape, the question is no longer who has the best model. It's who has locked in the infrastructure to run it at scale.