
Nvidia reveals first-ever CPU and Hopper GPU at GTC 2022

Nvidia CEO Jensen Huang kicked off the company’s GPU Technology Conference (GTC) with a keynote speech full of announcements. The key reveals include Nvidia’s first-ever discrete CPU, named Grace, as well as its next-generation Hopper architecture, which will arrive later in 2022.

The Grace CPU Superchip is Nvidia’s first discrete CPU ever, but it won’t be at the heart of your next gaming PC. Nvidia announced the Grace CPU in 2021, but this Superchip, as Nvidia calls it, is something new. It combines two Grace CPUs, similar to Apple’s M1 Ultra, connected through Nvidia’s NVLink interconnect.

A rendering of Nvidia's Grace Superchip.

Unlike the M1 Ultra, however, the Grace Superchip isn’t built for general performance. The 144-core CPU is built for A.I., data science, and applications with high memory requirements. The CPU still uses Arm cores, despite Nvidia’s abandoned $40 billion bid to purchase the company.


In addition to the Grace Superchip, Nvidia showed off its next-generation Hopper architecture. Despite speculation, this isn’t the architecture behind the rumored RTX 4080. Instead, it’s built for Nvidia’s data center accelerators. Nvidia is debuting the architecture in the H100 GPU, which will replace Nvidia’s previous A100.


Nvidia calls the H100 the “world’s most advanced chip.” It’s built using chipmaker TSMC’s N4 manufacturing process, packing in a staggering 80 billion transistors. As if that wasn’t enough, it’s also the first GPU to support PCIe 5.0 and HBM3 memory. Nvidia says just 20 H100 GPUs can “sustain the equivalent of the entire world’s internet traffic,” showing the power of PCIe 5.0 and HBM3.

Customers will be able to access the GPU through Nvidia’s fourth-generation DGX servers, which combine eight H100 GPUs with 640GB of HBM3 memory. These machines, according to Nvidia, provide 32 petaFLOPS of A.I. performance, six times that of the last-gen A100.

If the DGX doesn’t offer enough power, Nvidia is also offering its DGX H100 SuperPod. This builds on Nvidia renting out its SuperPod accelerators last year, allowing those without the budget for massive data centers to harness the power of A.I. This machine combines 32 DGX H100 systems, delivering a massive 20TB of HBM3 memory and 1 exaFLOP of A.I. performance.
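Nvidia’s stated figures line up under simple arithmetic. A rough back-of-envelope check, using only the numbers quoted above (the 80GB-per-GPU HBM3 capacity is inferred from the eight-GPU, 640GB DGX configuration):

```python
# Back-of-envelope check of Nvidia's stated DGX H100 and SuperPod figures.
GPUS_PER_DGX = 8
HBM3_PER_DGX_GB = 640        # eight GPUs x 80GB HBM3 each (inferred)
AI_PFLOPS_PER_DGX = 32       # Nvidia's stated A.I. performance per DGX H100

DGX_PER_SUPERPOD = 32
superpod_memory_tb = DGX_PER_SUPERPOD * HBM3_PER_DGX_GB / 1000
superpod_pflops = DGX_PER_SUPERPOD * AI_PFLOPS_PER_DGX

print(superpod_memory_tb)  # 20.48, matching the quoted "massive 20TB"
print(superpod_pflops)     # 1024 petaFLOPS, i.e. roughly 1 exaFLOP
```

In other words, the SuperPod numbers are straight multiples of the per-DGX specs; there is no extra headroom hiding in the marketing figures.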

Nvidia Hopper GPU family.

Nvidia is debuting the new architecture with its own EOS supercomputer, which includes 18 DGX H100 SuperPods for a total of 4,608 H100 GPUs. Enabling this system is Nvidia’s fourth generation of NVLink, which provides a high bandwidth interconnect between massive clusters of GPUs.
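The headline GPU count follows directly from that system breakdown (simple arithmetic based on the figures in this article):

```python
# EOS totals: 18 SuperPods, each made of 32 DGX H100 systems with 8 GPUs apiece.
SUPERPODS = 18
DGX_PER_SUPERPOD = 32
GPUS_PER_DGX = 8

total_gpus = SUPERPODS * DGX_PER_SUPERPOD * GPUS_PER_DGX
print(total_gpus)  # 4608, matching Nvidia's stated total for EOS
```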

As the number of GPUs scales up, Nvidia showed that performance on the last-gen A100 flatlines. Hopper and fourth-gen NVLink don’t have that problem, according to the company. As the number of GPUs scales into the thousands, Nvidia says H100-based systems can provide up to nine times faster A.I. training than A100-based systems.

This next-gen architecture provides “game-changing performance benefits,” according to Nvidia. Although exciting for the world of A.I. and high-performance computing, we’re still eagerly awaiting announcements around Nvidia’s next-gen RTX 4080, which is rumored to launch later this year.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…