Nvidia said the market opportunity for its artificial intelligence chips could reach at least $1 trillion through 2027, as the company sharpened its focus on inference computing, the fast-growing segment centered on running AI systems in real time.

At its GTC developer conference in San Jose, chief executive Jensen Huang introduced a new CPU and an AI system built using technology from Groq, signaling Nvidia’s intent to compete more directly in inference workloads. While Nvidia has long dominated AI training, inference is becoming an increasingly critical battleground as companies move from building models to serving hundreds of millions of users.

Huang said demand for inference computing is accelerating as AI systems are deployed more broadly across consumer and enterprise use cases. Nvidia expects this shift to create a far larger commercial opportunity than it previously projected, prompting the company to raise its market estimate significantly above the forecast it shared earlier this year.

The company also highlighted growing demand for standalone CPUs, reflecting how AI deployment is broadening beyond graphics processors alone. Nvidia’s roadmap now includes a wider mix of processors, networking technologies and integrated AI systems designed to support large-scale infrastructure.

The announcements are aimed at reassuring investors that Nvidia can maintain its leadership as AI spending shifts. Rather than relying solely on training chips, the company is pushing deeper into inference, system design and infrastructure for autonomous AI agents as the next phase of the AI market takes shape.