
What is a teraflop?

So, you’re in the market for a new graphics card, but you’re stuck trying to figure out which option is right for you. On top of comparing specs, you’ve probably encountered the words “teraflop rating,” but maybe you’re not quite sure what it means.

A teraflop rating measures your GPU’s performance, and it’s often crucial when it comes to sifting through all the graphics cards. There’s a lot you should know about teraflop (TFLOP) measurements and ratings.


OK, what is a TFLOP?


Unlike gigahertz (GHz), which measures a processor’s clock speed, a TFLOPS figure is a direct measurement of a computer’s computational performance.

Specifically, a teraflop refers to a processor’s capability to calculate one trillion floating-point operations per second. Saying something has “6 TFLOPS,” for example, means that its processor setup can handle up to 6 trillion floating-point calculations every second at peak.
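To make that scale concrete, here is a minimal sketch in plain Python (the 6 TFLOPS figure is just the example above, not any particular GPU):

```python
# A teraflop rating is a rate: trillions of floating-point operations per second.
TERA = 10**12

rating_tflops = 6                      # the hypothetical "6 TFLOPS" card above
ops_per_second = rating_tflops * TERA  # 6,000,000,000,000 operations each second
ops_per_minute = ops_per_second * 60

print(f"{ops_per_minute:,}")  # 360,000,000,000,000 operations per minute
```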

Microsoft rates its Xbox Series X custom processor at 12 TFLOPS, meaning that the console can perform 12 trillion floating-point calculations each second. For comparison, the AMD Radeon Pro GPU inside Apple’s 16-inch MacBook Pro tops out at 4 teraflops, while the redesigned Mac Pro (introduced in 2019) can reach up to 56 teraflops of power.

Do TFLOPs matter for gaming?


Microsoft recently revealed details about its Xbox Series X, stating that its graphics processor delivers 12 teraflops of performance. That’s double the 6 teraflops of the Xbox One X! The company described this as a “true generational leap in processing and graphics.” And that’s mostly true — processor speed isn’t everything for game performance (look at what the PlayStation 5 is doing with new storage innovations, for example). Still, it is a core factor in how well games play and how many graphics and action calculations can be done at any given time.

All that added power will enable higher frame rates and resolution support, as well as hardware-accelerated ray tracing. This performance will be further augmented with Microsoft implementing a custom algorithm for variable-rate shading (VRS), which renders a scene at different details depending on where the focus is. That helps maximize performance where it’s most needed, making a game look fantastic without having to use all the system’s resources to do so.

What are floating-point calculations?


Floating-point calculations are a common way of gauging the computational power of computers. In fact, FLOPS quickly became a common international standard for describing computing prowess.

Floating-point numbers are a computer’s way of representing real numbers — integers, numbers with decimal points, approximations of irrational numbers like pi, and so on — using a limited number of digits. A floating-point calculation is any calculation that operates on these numbers, particularly ones with decimal points. This is a more useful benchmark than counting fixed-point calculations (which use only whole integers) because the work that computers do frequently involves floating-point numbers and all their real-world complications.
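Because floating-point numbers have limited precision, they carry quirks that integer math doesn’t. A quick illustration in Python:

```python
import math

# 0.1 and 0.2 have no exact binary floating-point representation,
# so their sum is only approximately 0.3.
total = 0.1 + 0.2
print(total)                     # 0.30000000000000004
print(total == 0.3)              # False
print(math.isclose(total, 0.3))  # True: compare floats with a tolerance
```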

FLOPS measures how many calculations involving floating-point numbers a processor can perform in one second. Different devices need vastly different amounts: a traditional calculator, for example, may need only around 10 FLOPS for all its operations. So, when we start talking about megaflops (a million floating-point calculations per second), gigaflops (a billion), and teraflops (a trillion), you can start to see what sort of power we’re talking about.

Manufacturers frequently include FLOPS as a specification on computers to talk about how fast they are across the board. However, if you have a custom-built machine and really want to brag about its teraflops, too, then there’s a pretty simple equation that you can use to figure it out:

FLOPS = number of cores × clock speed (in Hz) × FLOPs per core per clock cycle
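As a sketch of that calculation in Python (the core count and clock speed below are the figures commonly cited for the Xbox Series X GPU, used here purely as an illustration):

```python
def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak FLOPS: cores x clock speed x FLOPs per core per cycle."""
    return cores * clock_hz * flops_per_cycle

# 3,328 shader cores at 1.825 GHz, 2 FLOPs per cycle (fused multiply-add)
tflops = peak_flops(3328, 1.825e9, 2) / 1e12
print(f"{tflops:.2f} TFLOPS")  # 12.15 TFLOPS -- matching the 12 TFLOPS headline figure
```

Note that this gives a theoretical peak; as the next section explains, real-world performance depends on much more than this one number.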

So, more TFLOPS means faster devices and better graphics?


While this assumption is right in some cases, it’s not uncommon to see GPUs with higher teraflops that exhibit much lower performance. While this might seem strange, it’s quite similar to what we see with wattage. Your final performance depends on multiple factors.

Let’s use an analogy to explain how these variables can affect performance. Take a flashlight, for example: wattage is just one of its characteristics, and factors like its lens and reflector design play a big role in how bright it actually is.

Teraflops are just one factor to consider, in addition to core speed, processors, and frame buffers.

Generally, though, more TFLOPS means faster speeds and improved graphics — speeds unheard of only years ago. Not long ago, many devices couldn’t approach even one teraflop; today, high-end hardware reaches figures like the Mac Pro’s 56.

Supercomputers have already pushed past 100 petaflops (one petaflop is a thousand teraflops). At the time of writing, the record belongs to Fugaku, a supercomputer from Japan rated at 442 petaflops.

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how to guides, best-of lists, and…