Nvidia’s keynote at GTC held some surprises

SAN JOSE — “I hope you realize this is not a concert,” said Nvidia President Jensen Huang to an audience so large it filled up the SAP Center in San Jose. That is how he introduced what is probably the exact opposite of a concert: the company’s GTC event. “You have arrived at a developers conference. There will be a lot of science describing algorithms, computer architecture, mathematics. I sense a very heavy weight in the room; suddenly, you’re in the wrong place.”

It may not have been a rock concert, but the leather-jacket-wearing 61-year-old CEO of the world’s third-most-valuable company by market cap certainly had a good number of fans in the audience. The company launched in 1993, with a mission to push general computing past its limits. “Accelerated computing” became the rallying cry for Nvidia: Wouldn’t it be great to make chips and boards that were specialized, rather than for a general purpose? Nvidia chips gave graphics-hungry gamers the tools they needed to play games in higher resolution, with better quality and higher frame rates.

It isn’t a huge surprise, perhaps, that the Nvidia CEO drew parallels to a concert. The venue was, in a word, very concert-y. Image Credits: TechCrunch / Haje Kamps

Monday’s keynote was, in a way, a return to the company’s original mission. “I want to show you the soul of Nvidia, the soul of our company, at the intersection of computer graphics, physics and artificial intelligence, all intersecting inside a computer.”

Then, for the next two hours, Huang did a rare thing: He nerded out. Hard. Anyone who had come to the keynote expecting him to pull a Tim Cook, with a slick, audience-focused keynote, was bound to be disappointed. Overall, the keynote was tech-heavy, acronym-riddled and unapologetically a developer conference.

We need bigger GPUs

Graphics processing units (GPUs) are where Nvidia got its start. If you’ve ever built a computer, you’re probably thinking of a graphics card that goes in a PCI slot. That’s where the journey began, but we’ve come a long way since then.

The company announced its brand-new Blackwell platform, which is an absolute monster. Huang says that the core of the processor was “pushing the boundaries of physics how big a chip could be.” It combines the power of two chips, offering speeds of 10 Tbps.

“I’m holding around $10 billion worth of equipment here,” Huang said, holding up a prototype of Blackwell. “The next one will cost $5 billion. Luckily for you all, it gets cheaper from there.” Putting a bunch of these chips together can crank out some truly impressive power.

The previous generation of AI-optimized GPUs was called Hopper. Blackwell is between 2 and 30 times faster, depending on how you measure it. Huang explained that it took 8,000 GPUs, 15 megawatts and 90 days to create the GPT-MoE-1.8T model. With the new system, you could use just 2,000 GPUs and 25% of the power.
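To put those keynote numbers side by side, here is a back-of-the-envelope sketch based purely on the figures Huang quoted; the Blackwell power draw below is simply 25% of the quoted 15 megawatts, and the identical 90-day run is an assumption for illustration.

```python
# Back-of-the-envelope comparison of the keynote's training figures for GPT-MoE-1.8T.
# All inputs are Huang's quoted numbers; nothing here is an independent benchmark.

hopper = {"gpus": 8_000, "power_mw": 15.0, "days": 90}
blackwell = {"gpus": 2_000, "power_mw": 15.0 * 0.25, "days": 90}  # assumed same-length run

gpu_reduction = hopper["gpus"] / blackwell["gpus"]        # 4x fewer GPUs
power_reduction = hopper["power_mw"] / blackwell["power_mw"]  # 4x less power

print(f"GPUs:  {hopper['gpus']} -> {blackwell['gpus']} ({gpu_reduction:.0f}x fewer)")
print(f"Power: {hopper['power_mw']:.2f} MW -> {blackwell['power_mw']:.2f} MW ({power_reduction:.0f}x less)")
```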

These GPUs are pushing a huge amount of data around, which is a great segue into another topic Huang talked about.

What’s next

Nvidia rolled out a new set of tools for automakers working on self-driving cars. The company was already a major player in robotics, but it doubled down with new tools for roboticists to make their robots smarter.

The company also introduced Nvidia NIM, a software platform aimed at simplifying the deployment of AI models. NIM uses Nvidia’s hardware as a foundation and aims to accelerate companies’ AI initiatives by providing an ecosystem of AI-ready containers. It supports models from various sources, including Nvidia, Google and Hugging Face, and integrates with platforms like Amazon SageMaker and Microsoft Azure AI. NIM will expand its capabilities over time, including tools for generative AI chatbots.
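The keynote didn’t walk through the developer workflow, but as a rough sketch of what querying one of these prepackaged containers might look like, the snippet below assumes a locally running container that exposes an OpenAI-compatible chat endpoint; the port, URL and model name are placeholders, not official values.

```python
# Hypothetical sketch: querying a locally running AI-model container.
# Assumes an OpenAI-compatible chat-completions endpoint on port 8000;
# the URL and model identifier are illustrative assumptions only.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "example-llm",  # placeholder model name
        "messages": [{"role": "user", "content": "Summarize Nvidia's GTC keynote."}],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```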

“Anything you can digitize: As long as there is some structure where we can apply some patterns, means we can learn the patterns,” Huang said. “And if we can learn the patterns, we can understand the meaning. When we understand the meaning, we can generate it as well. And here we are, in the generative AI revolution.”