GPU Ray Tracing Performance Comparisons [2021-2022]

Warhammer got an update today. Runs a little bit better. Performance drop with Raytracing is in the normal range for GI and reflections - around 40%.
That makes more sense. If I remember correctly, Control and Cyberpunk are in the 40-45% range with all RT effects at 4K.

Witcher 3 and Callisto Protocol are in the 65-70% range, which is a bit crazy.
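For anyone sanity-checking these figures, here's a quick sketch of how the drop percentages above are computed (the fps values are made up purely for illustration):

```python
# Convert an RT-off / RT-on fps pair into the "performance drop" percentage
# quoted in this thread. The fps values below are hypothetical.
def rt_drop_percent(fps_off: float, fps_on: float) -> float:
    return (fps_off - fps_on) / fps_off * 100

print(f"{rt_drop_percent(100, 60):.0f}%")  # 40% - the "normal" GI + reflections range
print(f"{rt_drop_percent(100, 33):.0f}%")  # ~67% - the Witcher 3 / Callisto Protocol range
```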
 
Warhammer got an update today. Runs a little bit better. Performance drop with Raytracing is in the normal range for GI and reflections - around 40%.
That's good to hear. Someone should test the performance impact on a Turing/Ampere GPU (on low settings and high).
 
Witcher 3 with just RTXGI on is in the over 200% range. From 119 fps (GPU limit) to 28 fps.

Ye gods, that bad? That's way worse than just meh code and not caring about optimization; I guess CDPR "just had to" hit its ship date before the holidays.

Considering Warhammer seems to run the same SDKs better, maybe there'll be a patch in January, given it's the holidays.
 
Yes, but it's unclear if the quality is the same. Given how expensive it is on PC, I don't see how it could be.

If the consoles are running at a lower quality level and it's completely destroying performance on PC without the option to turn quality down, then yes I agree that goes beyond simple poor optimisation.

If the quality level is the same, then that's even worse.

Can't wait for Alex's video.
 
Witcher 3 with just RTXGI on is in the over 200% range. From 119 fps (GPU limit) to 28 fps.
The Witcher 3 doesn't have any kind of advanced GI system, so the performance drop will always be much bigger than in other games: https://imgsli.com/MTQwMzcx

Is there any noticeable change to the visuals? I wonder if they just reduced the RT quality (ray count/resolution) as opposed to making what they had run better. Any idea if the frame drops improved? I was originally using RT on low for GI and reflections, and it managed to stay around 60 fps with small hordes, but once a big one came the framerate dropped heavily and it was basically unplayable.
I don't see a difference. But RTGI doesn't have this huge of an impact at all.
 
That's good to hear. Someone should test the performance impact on a Turing/Ampere GPU (on low settings and high).
I've got a 3080 and just went and had a quick look. These were done at 1440p with DLSS on Quality (so 1080p render res). RT off was getting 120 fps in the hub area, looking across the round room back down towards the spawn area. With RT reflections on low and RTGI on low that dropped to 80ish, and with RTGI changed to high it dropped to 70. I didn't get to try reflections set to high as well because it crashed, and I don't have enough time at the moment to jump in and try again, or to actually get into a mission and see if it's playable (it used to drop into the 30s on me).

I did quickly run around the circular room that Alex used to show the stutters in his video, and it seemed improved. I saw a couple of frame-time spikes, but they were nothing like the ones in Alex's video, and I think they were more to do with people loading into the hub. That was with RT reflections on low and GI on low.

I'll try to do a better job later tonight in a mission. I used to get about 70 fps in the hub with RT low/low before this patch, so there has been an improvement; not sure if it will be enough to keep it playable in a mission for me, though.
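To put those hub readings in frame-time terms (often more useful than fps deltas for judging what an RT effect costs), here's a rough sketch using the fps values above:

```python
# Frame-time view of the 3080 hub-area numbers above. Only as accurate as the
# eyeballed fps readings they are derived from.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

rt_off    = frame_time_ms(120)  # RT off, ~8.3 ms
rt_low    = frame_time_ms(80)   # reflections low + RTGI low, ~12.5 ms
rtgi_high = frame_time_ms(70)   # reflections low + RTGI high, ~14.3 ms

print(f"RT reflections low + RTGI low: +{rt_low - rt_off:.1f} ms/frame")     # ~4.2 ms
print(f"RTGI low -> high:              +{rtgi_high - rt_low:.1f} ms/frame")  # ~1.8 ms
```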
 
I've got a 3080 and just went and had a quick look. These were done at 1440p with DLSS on Quality (so 1080p render res). RT off was getting 120 fps in the hub area, looking across the round room back down towards the spawn area. With RT reflections on low and RTGI on low that dropped to 80ish, and with RTGI changed to high it dropped to 70. I didn't get to try reflections set to high as well because it crashed, and I don't have enough time at the moment to jump in and try again, or to actually get into a mission and see if it's playable (it used to drop into the 30s on me).

I did quickly run around the circular room that Alex used to show the stutters in his video, and it seemed improved. I saw a couple of frame-time spikes, but they were nothing like the ones in Alex's video, and I think they were more to do with people loading into the hub. That was with RT reflections on low and GI on low.

I'll try to do a better job later tonight in a mission. I used to get about 70 fps in the hub with RT low/low before this patch, so there has been an improvement; not sure if it will be enough to keep it playable in a mission for me, though.
Thank you for these tests! What I am interested in is the impact RTXGI alone has on the GPU, without reflections (preferably with RTXGI on low, like you tested!). Also, were the 120 fps without RTXGI in the hub area in a GPU or CPU limit? (If it's a CPU limit, your GPU usage will be below 90-100%.)
The Witcher 3 doesn't have any kind of advanced GI system, so the performance drop will always be much bigger than in other games: https://imgsli.com/MTQwMzcx
That's actually a good point. In a game without a dynamic GI system, the performance impact from adding RTXGI would be more drastic than in a game that already has a demanding GI system in place, assuming RTXGI replaces it entirely. Either way, it still runs super poorly with RTXGI, even without considering how the game ran before turning RTXGI on.
 
Thank you for these tests! What I am interested in is the impact RTXGI alone has on the GPU, without reflections (preferably with RTXGI on low, like you tested!). Also, were the 120 fps without RTXGI in the hub area in a GPU or CPU limit? (If it's a CPU limit, your GPU usage will be below 90-100%.)

That's actually a good point. In a game without a dynamic GI system, the performance impact from adding RTXGI would be more drastic than in a game that already has a demanding GI system in place, assuming RTXGI replaces it entirely. Either way, it still runs super poorly with RTXGI, even without considering how the game ran before turning RTXGI on.
I actually didn't pay attention to the GPU or CPU usage; it was sitting in the overlay, but I was kinda rushing. I should have time in a few hours to do a better job: I'll do it without RT reflections, make note of what the rest of my settings are, and get some framerates from an actual mission. But I'll get the hub area with no/low/high GI so you can see just the RTGI perf impact.
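For reference, here's a minimal sketch of how one could log GPU utilization during the retest, using the pynvml NVML bindings (assumes an NVIDIA card and the package installed). Sustained readings well below ~95% while in-game usually point at a CPU limit rather than a GPU limit:

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Sample GPU utilization for ~10 seconds while standing in the hub area.
samples = []
for _ in range(20):
    samples.append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)
    time.sleep(0.5)

print(f"average GPU utilization: {sum(samples) / len(samples):.0f}%")
pynvml.nvmlShutdown()
```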
 
Please focus on technical comparisons of performance and technical investigations into causes of poor performance. No conclusion should be drawn until the situation is fully understood.
 
Correct me if I'm wrong, but isn't that jump kind of insane? Not even 1 fps to 6 fps average?! That's more than 10 times the RT performance between architectures.
Portal performance is dictated mostly by registers, not RT performance - at least on AMD, if the early estimates are to be believed.
 
Correct me if I'm wrong, but isn't that jump kind of insane? Not even 1 fps to 6 fps average?! That's more than 10 times the RT performance between architectures.
It's insane indeed, and not only for AMD but for all architectures. The 4090 is 4.5x the 3090Ti, while the 3090Ti itself is ~2x the 2080Ti. I think it's natural that the 7900XTX is the way it is here. And according to PCGH and their usual analysis of RT heavy hitters, the 7900XTX did reach almost 2x over the 6950XT in two titles (Metro and Dying Light 2). Here is their full analysis.

In Path Traced games (Quake 2 RTX, Minecraft RTX) and Intense RT games (Cyberpunk 2077):
The 7900XTX is about 50% faster than 6950XT.
The 4090 is between 2.2x and 2.5x faster than the 7900XTX, while the 4080 is between 60% and 80% faster.
The 3090Ti is between 40% and 50% faster than the 7900XTX.

In Lego:
The 7900XTX extends its lead over the 6950XT to 60%.
The 4090 is 95% faster than the 7900XTX, the 4080 is 25% faster, and the 3090Ti is 5% faster.

In Metro Exodus:
The 7900XTX ups its game with a result 80% faster than the 6950XT.
The 4090 is 90% faster than the 7900XTX, the 4080 is 38% faster, and the 3090Ti is 8% faster.

In Dying Light 2:
The 7900XTX delivers its highest result yet: 90% faster than the 6950XT.
The 4090 is 2x faster than the 7900XTX, while the 4080 is 35% faster. The 3090Ti is 12% faster.

In Control and Ghostwire:
The 7900XTX returns to being 50% faster than the 6950XT.
The 4090 is about 70% faster, the 4080 is about 20% faster, and the 3090Ti is 5% faster.

Lastly, in Hitman 3:
The 7900XTX is 60% faster than the 6950XT; it's even 13% faster than the 3090Ti!
The 4090 is only 67% faster than the 7900XTX, while the 4080 is a meager 13% faster.


In summary, the 7900XTX truly excels in two games, Metro and Dying Light 2, providing a better-than-80% uplift over the 6950XT. In most other games (even heavy RT ones), the uplift is between 50% and 60%.

This is corroborated by the other extensive RT investigation by comptoir-hardware, where the average uplift is indeed ~50%; the uplift in Dying Light 2 and Metro is only 50% there, too. The highest uplift in their analysis is 65% over the 6950XT, in Hitman 3. So I guess it's area dependent and comes down to the composition of the scene rather than how much or how little RT there is, because even in path-traced games the uplift stays at ~50%.
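As an aside on the arithmetic in lists like this: "% faster" figures are ratios, so they chain multiplicatively rather than adding up. A quick sketch (the sample fps pair is hypothetical):

```python
# "% faster" is a ratio: (fps_a / fps_b - 1) * 100.
def percent_faster(fps_a: float, fps_b: float) -> float:
    return (fps_a / fps_b - 1) * 100

# Hypothetical pair: 90 fps vs 60 fps.
print(f"{percent_faster(90, 60):.0f}% faster")  # 50% faster, i.e. 1.5x

# Ratios chain by multiplication: if the 4090 is ~4.5x a 3090Ti and the
# 3090Ti is ~2x a 2080Ti (as above), the 4090 is ~9x a 2080Ti, not 6.5x.
print(f"~{4.5 * 2.0:.0f}x")
```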

 
The bad press around Witcher 3 performance is eroding whatever goodwill CDPR earned for doing the free update in the first place. People are also claiming the non-RT visuals have been degraded, though I haven't seen any actual proof of that.

PurePC.pl has tested the Witcher 3 NG - https://www.purepc.pl/wiedzmin-3-dz...wymagania-sprzetowe-z-ray-tracingiem?page=0,7:
In 1440p the 4090 is twice as fast as the 7900XTX, and the 4080 is 46% faster. In 1080p the 7900XTX is 42% faster than the 6900XT, which is less than the performance increase with Uber+ and without ray tracing (57%).

This review has some useful numbers. The performance hit of RT on the 4090 increases with resolution, which implies that something besides RTXGI is very heavy. The way RTXGI works, its cost per frame is mostly independent of resolution, so it should become a smaller fraction of total frame time as everything else gets more expensive at higher resolutions.

We need similar numbers but for each RT effect separately.
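To make that point concrete, a worked example with hypothetical frame times (the ~4 ms RTXGI cost below is an assumption for illustration, not a measured value):

```python
# An effect with a roughly fixed per-frame cost should be a *smaller* relative
# hit at higher resolutions. All numbers are hypothetical.
def drop_percent(base_ms: float, effect_ms: float) -> float:
    fps_off = 1000 / base_ms
    fps_on  = 1000 / (base_ms + effect_ms)
    return (fps_off - fps_on) / fps_off * 100

RTXGI_MS = 4.0  # assumed resolution-independent probe update cost
print(f"1080p (8 ms base):  {drop_percent(8.0, RTXGI_MS):.0f}% drop")   # ~33%
print(f"4K    (20 ms base): {drop_percent(20.0, RTXGI_MS):.0f}% drop")  # ~17%
# If the measured RT hit instead *grows* with resolution, something besides
# RTXGI (e.g. per-pixel RT work) must dominate the cost.
```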
 
That's a 76% performance drop though. A 200% drop would result in a negative frame rate lol.
Yeah, it's a stupid world we live in when even manufacturers regularly use wrong terms like that, like "power consumption dropped 2.5 times lower". No, it didn't. It would be producing power.
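Spelling the 119 fps to 28 fps example out both ways:

```python
# The Witcher 3 RTXGI example from earlier in the thread, expressed correctly
# as a percentage drop and as a slowdown factor.
fps_off, fps_on = 119, 28

drop = (fps_off - fps_on) / fps_off * 100   # a drop can never exceed 100%
slowdown = fps_off / fps_on                 # frame time grows by this factor

print(f"{drop:.0f}% performance drop")  # ~76%
print(f"{slowdown:.2f}x slower")        # ~4.25x, i.e. ~325% more frame time
```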
 
Correct me if I'm wrong, but isn't that jump kind of insane? Not even 1 fps to 6 fps average?! That's more than 10 times the RT performance between architectures.
When the fps is that low, it doesn't mean much. The most shocking part is the 3090 Ti at 10 fps and the 4090 at 45.
 
I still don't get why people are comparing the £1500+ RTX4090 to the 7900XTX.

We know the 4090 is a monster but it's a stupid comparison, the comparison we should all be making is with the RTX4080.

The 4090 even makes the 4080 look stupid.
 