Nvidia CEO Jensen Huang said the company’s next generation of AI chips is now in full production, promising up to five times more AI computing performance than previous models when running chatbots and other AI applications.
Speaking at the Consumer Electronics Show in Las Vegas, Huang said the new Vera Rubin platform will debut later this year and is already being tested by AI companies in Nvidia’s labs. The platform combines six different Nvidia chips, with flagship systems packing dozens of GPUs and CPUs that can be linked into large-scale “pods” of more than 1,000 chips.
Huang said the performance gains come partly from a proprietary data format that Nvidia hopes the wider industry will adopt, allowing major efficiency improvements even with a relatively modest increase in transistor count.
While Nvidia continues to dominate AI model training, competition in AI inference, the process of running trained models to deliver AI services to users, is intensifying from rivals like Advanced Micro Devices and from major customers such as Google that are developing their own chips.
Nvidia also unveiled new networking switches using co-packaged optics, a technology aimed at connecting thousands of machines more efficiently, competing with products from Broadcom and Cisco Systems.
Huang said Nvidia’s latest hardware is designed to outperform older chips such as the H200, which remains in high demand in China, as the company looks to defend its AI leadership amid growing geopolitical and market pressures.