First Intel Arc graphics card has two key features to compete with Nvidia and AMD

Intel finally pulled back the curtain on its first graphics card and the Intel Arc brand, which it will use for all future enthusiast graphics endeavors. The first generation of cards is code-named Alchemist and is set to launch in early 2022. Beyond confirming that the cards exist, Intel revealed that they have two key features to compete with AMD and Nvidia.

The first card, formerly known as DG2, will support real-time ray tracing and “A.I. accelerated super sampling.” We’re not sure how the supersampling works at this point, but it sounds a lot like Nvidia’s Deep Learning Super Sampling (DLSS) technology. Nvidia’s tech works by training an A.I. model on game images and then running the resulting supersampling algorithm on the dedicated Tensor cores of RTX graphics cards.
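
To make that idea concrete, here is a rough sketch of where an A.I. super-sampling pass sits in a game’s render loop. Everything in it is a hypothetical stand-in rather than Nvidia’s or Intel’s actual API: the game renders each frame at a reduced internal resolution, and a trained network reconstructs a display-resolution image from that frame plus the previous output.

```cpp
// Illustrative sketch only: where an AI super-sampling pass sits in a render
// loop. All types and functions are hypothetical stand-ins, not a real
// Nvidia, AMD, or Intel API.
#include <vector>

struct Frame { std::vector<float> color, depth, motionVectors; }; // render-resolution inputs
struct Image { std::vector<float> color; };                       // display-resolution output

// Stand-in for a trained neural upscaler. DLSS-style techniques feed the
// low-resolution frame and its motion vectors, along with the previous
// upscaled frame, into a network that reconstructs a higher-resolution image
// (on dedicated Tensor cores, in Nvidia's case).
Image aiSuperSample(const Frame& lowResFrame, const Image& previousOutput) {
    (void)lowResFrame;
    (void)previousOutput;
    Image out;
    out.color.resize(3840 * 2160 * 3); // pretend the network produced a 4K frame
    return out;
}

Frame renderSceneLowRes() { return Frame{}; } // stand-in for the game renderer
void present(const Image&) {}                 // stand-in for swap-chain present

int main() {
    Image history{}; // previous upscaled frame, reused for temporal reconstruction
    for (int frameIndex = 0; frameIndex < 3; ++frameIndex) {
        Frame frame = renderSceneLowRes();            // 1. render at e.g. 1080p
        Image output = aiSuperSample(frame, history); // 2. upscale to e.g. 4K
        present(output);                              // 3. display the result
        history = output;                             // 4. keep for the next frame
    }
    return 0;
}
```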

Concept art of an Intel DG2 graphics card.

We’re not sure how Intel’s implementation works yet. We reached out to ask whether it would work similarly to DLSS, but Intel politely declined to comment. Regardless, the combination of both features puts some heat on Nvidia and a lot of heat on AMD. AMD released a DLSS alternative in the form of FidelityFX Super Resolution (FSR), but it doesn’t use A.I. to enhance the image like DLSS does.

Both features show that Intel is willing to compete in the discrete graphics card market. Ray tracing has become the new standard for big-budget AAA video games, and because it’s so demanding on hardware, a supersampling feature like DLSS is necessary to run games at playable frame rates.

It looks like Intel understands that. Just last week, Intel announced that it hired the architect behind real-time ray tracing and DLSS at Nvidia. Intel hasn’t revealed how its feature works yet, but the hire is a good sign that its approach will land closer to DLSS than to FSR.

The future of Intel graphics

Outside of those two key features, Intel didn’t reveal much about its upcoming card. The video announcement showed a montage of games running on pre-production silicon, including Forza Horizon 4, Psychonauts 2, and Metro Exodus. Unfortunately, the gameplay demos didn’t show frame rates, and an Intel rep told us that the company isn’t ready to make performance claims at this time.

Leaked benchmarks showed a pre-production unit hitting clock speeds of 2,200MHz, which is faster than most consumer graphics cards. That doesn’t reveal much about performance, but it shows that Intel may be able to push its Xe HPG architecture pretty far.

All future enthusiast graphics endeavors will live under the Intel Arc name, including hardware, software, and services. The Arc name comes from the fact that “every game, gamer, and creator has a story, and every story has an arc.” Cue the Curb Your Enthusiasm theme.

Intel Arc logo.

The name would probably be better off without the explanation, but it’s clear Intel is all-in on the gamer aesthetic with this brand. Intel announced three upcoming generations of cards, code-named Battlemage, Celestial, and Druid. For Alchemist, Intel said it will reveal more details later this year, and that cards will show up in desktops and laptops in the first part of 2022.

First-generation cards come with full support for DirectX 12 Ultimate, meaning they’re capable of features like DirectX Raytracing and variable rate shading. Again, Intel isn’t ready to reveal further details on this front, but we’ll keep this story updated if we hear anything else.
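
For developers, DirectX 12 Ultimate support is something an application can check at runtime. The sketch below is not Intel-specific; it assumes a Windows machine with the standard Direct3D 12 headers and simply queries whatever GPU is present for its DirectX Raytracing and variable rate shading tiers.

```cpp
// Minimal sketch: query a Direct3D 12 device for DirectX Raytracing (DXR) and
// variable rate shading (VRS) support. Assumes the Windows 10 SDK; link
// against d3d12.lib. Error handling is trimmed for brevity.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_1,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No Direct3D 12 device available");
        return 1;
    }

    // The DirectX Raytracing tier lives in the OPTIONS5 feature struct.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    std::printf("Ray tracing supported: %s\n",
                opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0 ? "yes" : "no");

    // The variable rate shading tier lives in the OPTIONS6 feature struct.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    std::printf("Variable rate shading supported: %s\n",
                opts6.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED
                    ? "yes" : "no");
    return 0;
}
```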

Jacob Roach
Lead Reporter, PC Hardware