GPU Ray Tracing Performance Comparisons [2021-2022]

Use a 6700XT. Same number of transistors as the RTX 2080 Ti.
But half the memory bus, which in turn accounts for a lot of transistors in the form of Infinity Cache (∞$), as you're perfectly well aware.
No, sorry, if you want to shift the goalposts around like that, I won't be wasting my time with this.

The 6800 has the same full memory configuration as the other Navi20 GPUs.
That's what I wrote (apart from it not being Navi20, but 21), thanks for confirming it.
 
Navi21, GA104 and TU102 use the same number of transistors, so it is a fair comparison from a technical standpoint. AMD has spent transistors on a bigger cache to increase efficiency in rasterized games; Nvidia spent transistors on more compute units (double FP32, improved RT cores, etc.).
 
Navi21, GA104 and TU102 use the same number of transistors, so it is a fair comparison from a technical standpoint. AMD has spent transistors on a bigger cache to increase efficiency in rasterized games; Nvidia spent transistors on more compute units (double FP32, improved RT cores, etc.).
Not where we're coming from in this discussion.
If I may, the point I was debating was:
Turing is definitely more capable RT-wise than RDNA2. Period. Current workloads (gaming/professional) are proof enough of that.
 
If I may, the point I was debating was:
I think that the whole "we can't use s/w made for Turing to assess RDNA2 RT performance" angle is quite a bit misaligned already.
There are many benchmarks now which weren't made for Turing.
There are AMD-branded RDNA2 console ports which were definitely not made for Turing.
There are even production software results where RDNA2 can't beat Ampere despite using h/w RT acceleration while Ampere is not (at least I think that's the case? feel free to correct me on this one, I haven't looked into it much).
The data is piling up, and besides some anecdotal evidence none of it points to RDNA2's RT implementation being on par with even Turing. The relative "wins" it gets are often due to it having a *significant* raster advantage - we are looking at hybrid RT+raster renderers after all.
 
Pit them against each other in fully path-traced games like Quake 2 RTX etc.
Q2RTX is an interesting one btw. Sure, it's done mostly by Nvidia, but it is (kinda) open source, so nothing is stopping AMD from going into the code and optimizing the hell out of it for RDNA2. Why haven't they yet?
It's the same for the supposedly helpful console RT optimizations which are absent from PC APIs - what's stopping AMD from porting them into AGS or a proprietary VK extension if they are so helpful?
 
I believe it is the correct metric from a consumer point of view, and also from its being a cut-down of the 2nd-largest die in its family.
Again, I disagree vehemently; price is subject to its own bubble. Turing was priced high because it had little to no competition vs. the featureless RDNA1/Vega.

Then look at 6800 vs. 2080 Ti if you will, for which I provided an example as well. Or do you insist that I use the 6900 XT here, since it's also a 1000-EUR-class product, as the 2080 Ti has been?
Several other outlets have shown the 6900 XT to be either 10% faster than the 2080 Ti or equal to it. Be aware that Doom Eternal is really light on RT, restricting it to certain surfaces; a heavier workload will make the 2080 Ti shine more.

See path-traced Quake 2 or Minecraft for examples where the 2080 Ti stomps all over the 6900 XT. Or better yet, see The Medium or Call of Duty: Black Ops Cold War, where it enjoys a large advantage over the 6900 XT. I am not even listing the UE4 examples where the same thing happens.
 
Most of the current RT workloads are developed with a focus on what Nvidia hardware can and cannot do. So I think it's not (yet) the right time for such an absolute statement as your "Turing is definitely more capable RT-wise than RDNA2. Period."

For example, Doom Eternal benchmarks say otherwise. Don't be blinded by the better Ampere; RDNA2 can be quite competitive with Turing, even in a 550-vs-1100 comparison of RX 6800 vs. 2080 Ti.
(anecdotal) proof:
[Doom Eternal RT benchmark chart]
source: https://www.pcgameshardware.de/Doom...ng-RTX-Update-DLSS-Benchmarks-Review-1374898/

Since the RX 6800 is quite a bit faster than the 2080 Ti in non-RT workloads, simply comparing total rendering performance probably doesn't show a complete picture of their relative RT performance.
A better comparison (RT-wise) would be "how much more time each card spends on RT each frame." Unfortunately, there's no pure "RT off" benchmark in the review above, so it's difficult to make such a comparison.
However, we know from other reviews that the RX 6800 is slightly faster (<10%) than the 2080 Ti with RT off at 4K in Doom Eternal, and quite a bit faster (~20%) at 1440p. So if the RX 6800 has similar performance to the 2080 Ti with RT on, it's probably because its RT units are not as fast in this game.
Or, using another example, it's quite likely that the 3070's RT units are better than the 2080 Ti's. However, in the benchmark above, the 3070 (with lower texture settings, likely due to a memory size constraint) actually performs similarly to a 2080 Ti. If we followed the same logic, we'd conclude that the 3070 has similarly performing RT units to the 2080 Ti, but that's likely wrong. However, if we consider that the 3070 is actually slower (~6% at 1440p and ~10% at 4K, with comparable texture settings) than the 2080 Ti with RT off in Doom Eternal, it makes more sense.

[EDIT] There is also the "overlap" factor to be considered, as some of the RT computation can be done concurrently with traditional rendering work, but we have even less information on this, and it's probably reasonable to assume the overlapping portion is more or less proportional to the non-overlapping one.
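To make that "time spent on RT per frame" idea concrete, here's a minimal sketch of the comparison in Python; the fps figures are hypothetical placeholders, not numbers from any of the reviews cited:

```python
# Minimal sketch of the "RT time per frame" comparison described above.
# All fps figures are hypothetical placeholders, not measured data.

def frame_ms(fps):
    """Average frame time in milliseconds for a given average fps."""
    return 1000.0 / fps

def rt_cost_ms(fps_rt_off, fps_rt_on):
    """Naive per-frame RT cost: the extra frame time once RT is enabled.
    Ignores the overlap factor mentioned above (RT work running
    concurrently with raster work), so it is only an approximation."""
    return frame_ms(fps_rt_on) - frame_ms(fps_rt_off)

# Hypothetical 4K results, purely to illustrate the logic:
cards = {
    "RX 6800": {"rt_off": 110.0, "rt_on": 75.0},
    "2080 Ti": {"rt_off": 100.0, "rt_on": 75.0},
}

for name, fps in cards.items():
    cost = rt_cost_ms(fps["rt_off"], fps["rt_on"])
    print(f"{name}: ~{cost:.2f} ms of RT work per frame")
```

With these made-up numbers both cards land at the same RT-on fps, yet the card that was faster with RT off is paying ~4.2 ms of RT cost per frame versus ~3.3 ms - exactly the "similar total performance despite a raster lead" situation described above.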
 
It's that interesting dichotomy I've always found with respect to testing/comparing vs. playing games. In terms of playing and experiencing a game it tends to be more dynamic, but in terms of testing/comparing games (and by extension the hardware involved) everything tends to be more static.
 
Since the RX 6800 is quite a bit faster than the 2080 Ti in non-RT workloads, simply comparing total rendering performance probably doesn't show a complete picture of their relative RT performance.
A better comparison (RT-wise) would be "how much more time each card spends on RT each frame." Unfortunately, there's no pure "RT off" benchmark in the review above, so it's difficult to make such a comparison.
I agree, but AFAIA we do not have this better set of data - only for Ampere/Turing are there some captures of the utilization of the different execution engines. But I'd love to see it nonetheless.
 
So if the RX 6800 has similar performance to the 2080 Ti with RT on, it's probably because its RT units are not as fast in this game.
In GameGPU's testing: the 6900XT is 10% faster than 2080Ti, and the 2080Ti is 10% faster than 6800 @4K.
https://gamegpu.com/action-/-fps-/-tps/doom-eternal-test-rtx

In Hardwareluxx's testing, the 6900XT is 10% faster than the 3070, and the 3070 is 10% faster than the 6800 @4K, so same results.
https://www.hardwareluxx.de/index.p...-eternal-mit-raytracing-und-dlss-im-test.html

In KitGuru's testing, the 6900XT is equal to the 3070, while the 3070 is 25% faster than the 6800 @4K.

The more RT is used in any given scene, the smaller the gap between the 3070/2080Ti and the 6900XT; the latter is a 50% more powerful GPU than either of them in raster, yet it can't keep up once some moderate RT is activated.
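A toy frame-time model shows why that happens; all per-frame costs below are invented for illustration, not taken from any benchmark:

```python
# Toy hybrid-renderer model: frame time = raster time + RT time,
# with no overlap assumed. All numbers are invented for illustration.

def hybrid_fps(raster_ms, rt_ms):
    """fps under a serial raster-plus-RT frame-time model."""
    return 1000.0 / (raster_ms + rt_ms)

# Hypothetical cards: A is 50% faster at raster, while B traces rays
# twice as fast per unit of RT work in the scene.
RASTER_A, RASTER_B = 6.0, 9.0    # ms of raster work per frame
RT_UNIT_A, RT_UNIT_B = 2.0, 1.0  # ms per unit of RT work

for rt_units in (0, 2, 4, 8):    # increasing RT load in the scene
    fps_a = hybrid_fps(RASTER_A, RT_UNIT_A * rt_units)
    fps_b = hybrid_fps(RASTER_B, RT_UNIT_B * rt_units)
    print(f"RT load {rt_units}: A = {fps_a:5.1f} fps, B = {fps_b:5.1f} fps")
```

With zero RT load, A's 50% raster lead is the whole story; as the RT share of the frame grows, B closes the gap and eventually overtakes despite its raster deficit - the same pattern as the 6900XT against the 3070/2080Ti above.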
 
All of this is moot though, because the fact is that RTX hinders gameplay and is a novelty. Ironically, you can turn RTX off and still get lighting in games... and you get 30%+ more frames. I buy a new video card for better performance, not less.

Secondly, comparing brands, isn't there a straight-up DXR benchmark that works natively, since both brands support DX12 Ultimate?
 
All of this is moot though, because the fact is that RTX hinders gameplay and is a novelty. Ironically, you can turn RTX off and still get lighting in games... and you get 30%+ more frames. I buy a new video card for better performance, not less.

Yeah every new graphical feature since the Atari 2600 has been a novelty.

Who needs realistic graphics when we could be playing Pong at 1,000,000fps!
 
Yeah every new graphical feature since the Atari 2600 has been a novelty.

Who needs realistic graphics when we could be playing Pong at 1,000,000fps!

Baked in lighting can be realistic.

I think the big draw for ray tracing in games is developer ease... and time. They do not have to spend hours upon hours baking and re-baking. The end user may not notice RT so much in the future, but it will allow for more dynamic environments.
 
Yeah because in real life shadows and light sources never move or turn on and off. Super realistic.

Do we really need to repeat the obvious? Of course baked lighting has limitations; his point is that it's still possible to make great-looking games under those constraints. Not all games need to be fully dynamic all the time. Of course it's good that games which need to be fully dynamic can be; please don't state that other obvious point.

Side note: Quake was the first game to use lightmaps, and it already featured lights turning on and off.
 
Do we really need to repeat the obvious? Of course baked lighting has limitations; his point is that it's still possible to make great-looking games under those constraints. Not all games need to be fully dynamic all the time. Of course it's good that games which need to be fully dynamic can be; please don't state that other obvious point.

Side note: Quake was the first game to use lightmaps, and it already featured lights turning on and off.

He set the bar at "realism". Realism implies dynamic lighting, since, you know, that's how the real world works. Of course it's possible to make static scenes look awesome with lightmaps. But it's up for debate whether anything static in a game should be considered great-looking by today's standards.
 