Anthropic has stayed competitive in the AI race by focusing on efficiency rather than sheer scale, according to president and co-founder Daniela Amodei, who outlined the company’s approach in an interview with CNBC.
While rivals across Silicon Valley are racing to secure massive data centers and lock up chips years in advance, Anthropic is betting that smarter algorithms, higher-quality data and disciplined spending can keep it at the technological frontier without outbuilding everyone else. Amodei said Anthropic has often operated with far less compute and capital than competitors, yet has consistently delivered top-tier models.
The strategy contrasts sharply with that of OpenAI, which has made roughly $1.4 trillion in headline compute and infrastructure commitments as it pursues scale as a competitive advantage. Anthropic, by contrast, argues that progress will not be determined solely by the size of pre-training runs.
That does not mean Anthropic is avoiding scale altogether. The company has about $100 billion in compute commitments and expects those needs to grow. But Amodei argued that industry spending figures are often not directly comparable and that pressure to overcommit early could leave some players exposed if demand fails to keep pace.
A key uncertainty, she said, is not technological progress but adoption. Even as model capabilities improve rapidly, businesses and individuals may take longer to integrate AI into real workflows, slowing the economic payoff of massive infrastructure bets.
Anthropic’s enterprise-first focus and multi-cloud distribution strategy are designed to preserve flexibility as the AI sector matures. As both Anthropic and OpenAI edge closer to public-market readiness, the coming years may reveal whether efficiency or brute-force scaling proves the more durable path in the AI arms race.