Richly rendered ‘Asteroids’ demo showcases power of Nvidia’s RTX graphics

Nvidia has released a new demo to showcase some of the advanced graphics capabilities of the company’s Turing architecture found on its latest RTX series graphics cards, like the flagship GeForce RTX 2080 Ti. The public demo, called Asteroids, highlights the architecture’s new mesh shading capabilities, which Nvidia claims will improve image quality and performance when a game renders a large number of complex objects in a scene.

With Turing, Nvidia introduced a new programmable geometric shading pipeline that transfers some of the heavy geometry workload from the CPU onto the GPU. The GPU then applies culling techniques so that objects — in the case of this demo, asteroids — are rendered with a high level of detail and image quality.

“Turing introduces a new programmable geometric shading pipeline built on task and mesh shaders,” Nvidia graphics software engineer Manuel Kraemer wrote in a blog post detailing the benefits of mesh shading on Turing. “These new shader types bring the advantages of the compute programming model to the graphics pipeline. Instead of processing a vertex or patch in each thread in the middle of fixed function pipeline, the new pipeline uses cooperative thread groups to generate compact meshes (meshlets) on the chip using application-defined rules.”
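To make the meshlet idea concrete, here is a minimal, hypothetical sketch — not Nvidia’s code, and not a shader — of how an application might greedily partition a triangle list into meshlets. The 64-vertex and 126-triangle caps are illustrative limits chosen for this example:

```python
# Illustrative sketch of meshlet construction: pack triangles (triples of
# vertex indices) into small, self-contained clusters that a cooperative
# thread group could process in parallel. The caps below are hypothetical.

MAX_VERTICES = 64
MAX_TRIANGLES = 126

def build_meshlets(triangles):
    """Greedily pack triangles into meshlets, starting a new meshlet
    whenever adding a triangle would exceed either cap."""
    meshlets = []
    current = {"vertices": set(), "triangles": []}
    for tri in triangles:
        new_verts = set(tri) - current["vertices"]
        if (len(current["vertices"]) + len(new_verts) > MAX_VERTICES
                or len(current["triangles"]) + 1 > MAX_TRIANGLES):
            meshlets.append(current)
            current = {"vertices": set(), "triangles": []}
        current["vertices"] |= set(tri)
        current["triangles"].append(tri)
    if current["triangles"]:
        meshlets.append(current)
    return meshlets
```

Because each meshlet is bounded in size and carries its own vertex set, the GPU can test, cull, or shade meshlets independently and in parallel, which is the property the new pipeline exploits.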

In the demo, Nvidia showed that each asteroid contains 10 levels of detail. Objects are segmented into smaller meshlets, and Turing allows the meshlets to be rendered in parallel with more geometry while fetching less data overall. With Turing, the task shader runs first, checking each asteroid and its position in the scene to determine which level of detail, or LoD, to use. Sub-parts, or meshlets, are then tested by the mesh shader, and the remaining triangles are culled by the GPU hardware. Before Turing, the GPU had to process each triangle individually, which created congestion on both the CPU and the GPU.
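The two-stage flow described above can be sketched in a few lines of Python. This is a simplified stand-in for the task-shader stage, not the demo’s actual logic: the distance thresholds, the linear LoD mapping, and the bounding-sphere visibility test are all assumptions made for illustration:

```python
LOD_COUNT = 10  # the demo's asteroids each carry 10 levels of detail

def select_lod(distance, near=1.0, far=1000.0):
    """Pick a level of detail from camera distance: LoD 0 is the most
    detailed, LOD_COUNT - 1 the coarsest. The near/far range and the
    linear mapping are hypothetical."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return min(int(t * LOD_COUNT), LOD_COUNT - 1)

def visible_meshlets(meshlets, frustum_test):
    """Keep only meshlets whose bounding sphere passes a visibility test,
    mimicking the per-meshlet culling performed before the mesh stage."""
    return [m for m in meshlets
            if frustum_test(m["center"], m["radius"])]
```

The point of the sketch is the ordering: a cheap per-object LoD decision happens first, whole meshlets are discarded next, and only the triangles inside surviving meshlets ever reach per-triangle hardware culling.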

“By combining together efficient GPU culling and LOD techniques, we decrease the number of triangles drawn by several orders of magnitude, retaining only those necessary to maintain a very high level of image fidelity,” Kraemer wrote. “The real-time drawn triangle counters can be seen in the lower corner of the screen. Mesh shaders make it possible to implement extremely efficient solutions that can be targeted specifically to the content being rendered.”

In addition to using this technique to create rich scenes in a game, Nvidia said that the process could also be used in scientific computing.

“This approach greatly improves the programmability of the geometry processing pipeline, enabling the implementation of advanced culling techniques, level-of-detail, or even completely procedural topology generation,” Nvidia said.

Developers can download the Asteroids demo through Nvidia’s developer portal, and the company also posted a video showing how mesh shaders can improve rendering.

Chuong Nguyen