
THE NUMBER was designed to impress. At Nvidia's annual GTC conference in San Jose on Monday, Jensen Huang told a fervent crowd that the company's Blackwell and Rubin chip families would generate at least $1 trillion in cumulative revenue through the end of 2027. For a company that barely cleared $27 billion in annual sales three years ago, the figure borders on the absurd — the kind of projection that, in any other era of computing, would invite open ridicule.
Yet the market's response was telling. After an initial 4.8% pop, Nvidia's shares pared gains to close up just 1.6% at $183.19. The trillion-dollar headline extended a previous $500 billion forecast by one year, from 2026 to 2027. Investors, evidently, can do arithmetic. A company already on pace for the half-trillion mark does not need to grow at a blistering rate to reach double that figure with twelve extra months of runway. Nvidia's market capitalization sits at $4.4 trillion — still the world's largest — but the stock is down 3.4% year-to-date heading into GTC, a fact Huang's showmanship could not quite obscure.
Chip off the old Groq
But the real substance of the keynote lay not in the revenue forecast but in two product announcements that signal where Nvidia reckons its next frontiers are. The first is the Groq 3 LPU, a specialised inference chip born from Nvidia's quasi-acquisition of the startup Groq last December. While technically a licensing deal, the arrangement saw Groq's founders and a substantial portion of its engineering team decamp to Nvidia: an acqui-hire in everything but the SEC filing. The LPU, or language processing unit, uses fast on-chip memory to generate text near-instantaneously, and Nvidia will offer it as a coprocessor alongside its GPU accelerators. Samsung will manufacture the silicon, with systems shipping in the second half of 2026.
The move is a tacit acknowledgment that inference — the process of running trained AI models, as opposed to training them — is becoming the dominant workload. Training requires brute computational force; inference rewards speed, efficiency, and cost per query. As AI applications proliferate from chatbots to autonomous agents, the ratio of inference to training compute is shifting decisively. Nvidia's GPU empire was built on the training side of that equation. The Groq integration hedges the bet.
The second announcement may prove more consequential still. Nvidia unveiled Vera, a general-purpose CPU, and plans to sell standalone computers built entirely around it. Huang called the CPU opportunity "for sure" a multibillion-dollar business — a phrase that, coming from a man in a leather jacket presiding over $4.4 trillion in market value, carries a certain understatement. Vera will combine attributes of data center, gaming, and laptop processors, promising lower power consumption and the ability to handle diverse workloads simultaneously.
Silicon sprawl
Still, the CPU push raises a question Nvidia has hitherto been able to sidestep: how many fights can one company pick at once? The standalone CPU market puts Nvidia in direct competition with Intel — whose server processor business, albeit diminished, remains formidable — as well as AMD, Amazon's Graviton lineup, and the growing ambitions of SoftBank's Arm Holdings. A recent agreement with Meta Platforms to supply standalone Nvidia CPUs signals the company's seriousness, but also the scale of the competitive response it is inviting.
The broader challenge is structural. As AI software matures, many operators are discovering that inference workloads can run adequately on cheaper, less power-hungry CPUs rather than Nvidia's premium accelerators. Every CPU Nvidia sells may, paradoxically, cannibalise its GPU margins. And the company's own customers — hyperscalers like Amazon, Google, and Meta — continue to invest heavily in in-house chip designs, motivated precisely by the desire to reduce their dependence on a single supplier whose gross margins routinely exceed 70%.
Nvidia's response has been to accelerate relentlessly, replacing its entire product lineup annually and layering on networking hardware, software, and open-source AI models. The strategy is less chip company than platform play — an attempt to make the Nvidia ecosystem so comprehensive that switching costs become prohibitive.
Whether that flywheel can sustain itself through 2027 depends on a variable even Huang cannot forecast with confidence: how quickly the trillion-dollar wave of AI infrastructure spending translates into actual revenue for the companies writing the cheques. A trillion dollars in chip sales is only as durable as the business cases those chips are meant to serve. Nvidia has built the picks and shovels for a gold rush. The question, as always, is how much gold is actually in the ground. ■
