Broadcom has signed a long-term agreement with Google to develop and supply future generations of custom artificial intelligence chips through 2031, strengthening collaboration in next-generation data center infrastructure.

The deal focuses on advancing Google’s tensor processing units (TPUs), which are designed to handle AI workloads more efficiently and serve as an alternative to GPUs from competitors like Nvidia.

In parallel, Broadcom also reached an agreement involving Anthropic, enabling the startup to access approximately 3.5 gigawatts of AI computing capacity powered by Google’s processors starting in 2027.

The agreements reflect surging demand for custom AI silicon as major technology firms seek greater control over performance, cost and scalability. Google has been positioning its TPUs as a core driver of cloud growth, aiming to demonstrate returns on its heavy AI investments.

Anthropic, whose Claude model has seen rapid adoption, continues to diversify its infrastructure stack, using hardware from multiple providers including Google TPUs, Amazon’s Trainium chips and Nvidia GPUs. Amazon remains its primary cloud partner.

The agreements underscore a broader shift in the AI semiconductor landscape, where hyperscalers are increasingly designing their own chips and forming strategic alliances to reduce dependence on dominant suppliers.