SAN FRANCISCO — Inside Anthropic headquarters, President and co-founder Daniela Amodei keeps coming back to a phrase that’s become a sort of governing principle for the artificial intelligence startup’s entire strategy: Do more with less.
It’s a direct challenge to the prevailing mood across Silicon Valley, where the biggest labs and their backers are treating scale as destiny.
Firms are raising record sums, locking up chips years in advance, and pouring concrete across the American heartland for data centers in the belief that the company that builds the largest intelligence factory will win.
OpenAI has become the clearest example of that approach.
The company has amassed roughly $1.4 trillion in headline compute and infrastructure commitments as it works with partners to stand up massive data center campuses and secure next-generation chips at a pace the industry has never seen.
Anthropic’s pitch is that there’s another way through the race, one where disciplined spending, algorithmic efficiency, and smarter deployment can keep you at the frontier without trying to outbuild everyone else.
“I think what we have always aimed to do at Anthropic is be as judicious with the resources that we have while still operating in this space where it’s just a lot of compute,” Amodei told CNBC. “Anthropic has always had a fraction of what our competitors have had in terms of compute and capital, and yet, pretty consistently, we’ve had the most powerful, most performant models for the majority of the past several years.”

Daniela Amodei and her brother, Dario Amodei, who is Anthropic’s CEO and a Baidu and Google alumnus, helped build the very worldview they’re now betting against.
Dario Amodei was among the researchers who helped popularize the scaling paradigm that has guided the modern model race: the idea that increasing compute, data, and model size tends to improve a model’s capabilities in a predictable way.
That pattern has effectively become the financial bedrock of the AI arms race.
It underwrites hyperscaler capital spending, justifies towering chip valuations, and keeps private markets willing to assign enormous prices to companies that are still spending heavily to reach profitability.
But even as Anthropic has benefited from that logic, the company is trying to prove that the next phase of competition won’t be decided only by who can afford the largest pre-training runs.
Its strategy leans into higher-quality training data, post-training techniques that improve reasoning, and product choices designed to make models cheaper to run and simpler to adopt at scale — the part of the AI business where the compute bill never stops.
To be clear, Anthropic isn’t operating on a shoestring. The company has roughly $100 billion in compute commitments, and expects those requirements to keep rising if it wants to stay at the frontier.
“The compute requirements for the future are very large,” Daniela Amodei said. “So our expectation is, yes, we will need more compute to be able to just stay at the frontier as we get bigger.”
Still, the company argues that the headline numbers flying around the sector are often not directly comparable — and that the industry’s collective certainty about the “right” amount to spend is less solid than it sounds.
“A lot of the numbers that are thrown around are sort of not exactly apples to apples, because of just how the structure of some of these deals are kind of set up,” she said, describing an environment where players feel pressure to commit early to secure hardware years down the line.
The bigger truth, she added, is that even insiders who helped shape the scaling thesis have been surprised by how consistently performance and business growth have compounded.

“We have continued to be surprised, even as the people who pioneered this belief in scaling laws,” Daniela Amodei said. “Something that I hear from my colleagues a lot is, the exponential continues until it doesn’t. And every year we’ve been like, ‘Well, this can’t possibly be the case that things will continue on the exponential’ — and then every year it has.”
That line captures both the optimism and the anxiety of today’s buildout.
If the exponential keeps holding, then the companies that lock up power, chips and sites early may prove prescient. If it breaks — or if adoption lags behind the pace of capability — then the players that overcommitted could be left carrying years of fixed costs and long-lead-time infrastructure built for demand that never arrives.
Daniela Amodei drew a distinction between the technology curve and the economic curve, an important nuance that tends to get conflated in the public debate.
From a technological perspective, she said Anthropic doesn’t see progress slowing down, based on what the company has observed so far. The more complicated question is how quickly businesses and consumers can integrate those capabilities into real workflows, where procurement, change management, and human friction can slow even the best tool.
“Regardless of how good the technology is, it takes time for that to be used in a business or sort of personal context,” she said. “The real question to me is: How quickly can businesses in particular, but also individuals, leverage the technology?”
That enterprise emphasis is central to why Anthropic has become such a closely watched bellwether for the broader generative AI industry.
The company has positioned itself as an enterprise-first model provider, with much of its revenue tied to other companies paying to plug Claude into workflows, products, and internal systems — usage that can be stickier than a consumer app, where churn can rise once the novelty fades.

Anthropic said revenue has grown tenfold year over year for three straight years. And it has built a distribution footprint that’s unusual in a market defined by fierce rivalry. The Claude models are available across the major cloud platforms, including through partners that are also building and selling competing models.
Daniela Amodei framed that presence less as détente and more as a reflection of customer pull, with large enterprises wanting optionality across clouds, and cloud providers wanting to offer what their biggest customers are asking to buy.
In practice, that multicloud posture is also a way to compete without making a single infrastructure bet.
If OpenAI is trying to anchor a vast buildout around bespoke campuses and dedicated capacity, Anthropic is trying to remain flexible, shifting where it runs based on cost, availability, and customer demand, while focusing internal energy on improving model efficiency and performance per unit of compute.
As 2026 begins, the divide matters for another reason: Both companies are being pushed toward the discipline of public-market readiness while still operating in a private-market world where compute needs are growing faster than certainty.
Anthropic and OpenAI have not announced IPO timelines, but both are making moves that look like preparation, adding finance, governance, forecasting, and an operating cadence that can withstand public scrutiny.
At the same time, both are still raising fresh capital and striking ever-larger compute arrangements to fund the next leg of model development.
That sets up a real test of strategy rather than rhetoric.
If the market keeps funding scale, OpenAI’s approach may remain the industry standard. If investors start demanding greater efficiency, Anthropic’s “do more with less” posture could put it at an advantage.
In that sense, Anthropic’s contrarian bet isn’t that scaling doesn’t work. It’s that scaling isn’t the only lever that matters, and that the winner of the next phase may be the lab that can keep improving while spending in a way the real economy can sustain.
“The exponential continues until it doesn’t,” Daniela Amodei said. The question for 2026 is what happens to the AI arms race — and to the companies building it — if the industry’s favorite curve finally stops behaving.