GPU Ray Tracing Performance Comparisons [2021-2022]

But how do you know that will always be the case?



Software Ray Tracing is the only performant option in scenes with many overlapping instances, while Hardware Ray Tracing is the only way to achieve high quality mirror reflections on surfaces.

They removed that 100,000 number it seems. Or it's in another guide, but yeah, that's what Epic is saying. And some developers, like the dev from that Rally demo, disable it for this reason.
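For anyone wanting to follow that guidance per project, here's a minimal DefaultEngine.ini sketch; the cvar name (r.Lumen.HardwareRayTracing) is from memory of Epic's docs, so treat it as an assumption and check it against your engine version:

Code:
[SystemSettings]
; HW RT Lumen - what you want for high quality mirror reflections
r.Lumen.HardwareRayTracing=1
; setting it to 0 instead falls back to SW (SDF) Lumen, the recommended path for
; heavily kit-bashed scenes with lots of overlapping instances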
 
They shouldn't be the standard. A game with SSR is outdated, like using Linpack to determine the performance of HPC accelerators.
Why do you assume that AAA games will just keep sticking to screen space techniques when there are other options outside of HW RT, such as alternative world-space/view-independent solutions like SDFs, capsules, planar reflections, voxels, virtual shadow mapping, etc.?

Review outlets don't care about indie games that are purely showcases, like tech demos or mods ...
Because BFV had a different Frostbite branch than the rest of EA's branches; it's always a mess with Frostbite.

Anyway, BF2042 uses RT AO. So the main engine branch got ray tracing again.
So I assume Need for Speed or the Dead Space remake will have this feature in their games?
No, Northlight is powering Max Payne Remake and the next Alan Wake.

All engines have championed hardware ray tracing, even Crytek, so I don't know what your point is exactly. UE5 uses hardware ray tracing to render an entire city.
I wonder if that's their last game before completely transitioning to UE5?

The existing implementation in CryEngine only works on one vendor while it uses shoddy compute-based paths for the others, so it might as well be placed in the garbage bin when doing comparisons between vendors ...
 
Why do you assume that AAA games will just keep sticking to screen space techniques when there are other options outside of HW RT, such as alternative world-space/view-independent solutions like SDFs, capsules, planar reflections, voxels, virtual shadow mapping, etc.?
Because generally speaking it's a choice between performance and image quality, and if you need performance there's not much else to do but just use the "previous gen" SS techniques - as everything else you've listed will likely give you the same performance as "h/w RT" but with worse image quality. UE5 demonstrates this already, with s/w Lumen not being any faster than the h/w one while the latter provides better overall IQ - better reflections specifically, thus far.
 
But how do you know that will always be the case?

Brian Karis said it will always be the case as long as they stick with the current approach of kit-bashing and overlapping meshes. In the future they are considering changes to Nanite for terrain rendering.
 
Because generally speaking it's a choice between performance and image quality, and if you need performance there's not much else to do but just use the "previous gen" SS techniques - as everything else you've listed will likely give you the same performance as "h/w RT" but with worse image quality. UE5 demonstrates this already, with s/w Lumen not being any faster than the h/w one while the latter provides better overall IQ - better reflections specifically, thus far.
Bold of you to assume that this holds true for all hardware, when it might not be the case anymore as of recently, or possibly in the future ...

Maybe on Nvidia the alternatives would net an inferior tradeoff, but maybe on other vendors the alternative is both faster and higher quality than using HW RT ...
 
Well, Transparency Reflections are not a hardware RT feature; they work with software RT as well.


Obviously, HW-RT transparency reflections look better, especially when objects are out of screen space (though unlike SSR without a cube map fallback, there's still simplified geometry reflecting), but honestly, the software path gets the job done nicely in most cases. It's a huge win compared to SSR, and people might find the differences between RT on and off (HW-Lumen vs SW-Lumen) even more negligible once Lumen is widely used, compared to games today where the comparison is against screen space reflections.

I was pretty disappointed to find out they work in SW-Lumen as well; I thought it was a HW-RT exclusive feature.
In SW Lumen they are mainly screen space and don't include skinned objects or moving objects with colour info at all. Huh? They are not the same.
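If anyone wants to reproduce that kind of A/B comparison themselves, a rough sketch of the console toggles; the cvar name is recalled from Epic's docs and may differ between UE5 versions, so treat it as an assumption:

Code:
r.Lumen.HardwareRayTracing 1   (HW RT Lumen: off-screen and moving objects show up in reflections)
r.Lumen.HardwareRayTracing 0   (SW/SDF Lumen: reflections lean on screen traces plus distance fields)
stat gpu                       (compare the GPU cost of the two paths on your own hardware)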
 
Bold of you to assume that this holds true for all hardware, when it might not be the case anymore as of recently, or possibly in the future ...
I mean, if some h/w has a purposefully crippled h/w RT while also providing a whopping (and actually reachable) TFLOPs figure, then sure.
It would probably need to be about 10x faster in TFs than its nearest competitor with a good h/w RT solution for that to be achievable though. Do you see something like that happening any time soon?
Because otherwise we're looking at some obscure h/w which isn't at all in line with where the h/w market in general is evolving - and I don't really see how such h/w would be able to end up in consoles even where perf/watt is key.
 
Why do you assume that AAA games will just keep sticking to screen space techniques when there are other options outside of HW RT, such as alternative world-space/view-independent solutions like SDFs, capsules, planar reflections, voxels, virtual shadow mapping, etc.?
Because it doesn't make sense not to use ray tracing while going with some obscure technique to achieve a similar result. NVIDIA and Epic invested in UE4, and now everyone can make games with ray tracing - GI, DI, reflections, etc. At the same time there are AAA games using SSR to fake reflections, introducing massive artefacts and having worse image quality than $10 indie games.
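To be fair, the barrier in UE4 is low - a couple of config lines plus running under DX12. A minimal sketch, assuming the standard UE4 cvar names (r.RayTracing and the per-effect r.RayTracing.* toggles), which may vary by engine version:

Code:
[/Script/Engine.RendererSettings]
; prerequisite: ray tracing of skinned meshes needs the GPU skin cache
r.SkinCache.CompileShaders=True
; enable DXR support at startup (needs a DX12-capable GPU and driver)
r.RayTracing=1
; per-effect toggles, usually flipped at runtime or via scalability settings:
; r.RayTracing.Reflections 1
; r.RayTracing.GlobalIllumination 1
; r.RayTracing.Shadows 1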
 
I mean, if some h/w has a purposefully crippled h/w RT while also providing a whopping (and actually reachable) TFLOPs figure, then sure.
It would probably need to be about 10x faster in TFs than its nearest competitor with a good h/w RT solution for that to be achievable though. Do you see something like that happening any time soon?
The industry doesn't have to play by a specific vendor's rules, such as using their HW RT implementation. The industry largely makes its own rules, and if they want to use alternatives because it suits their own target demographic better as opposed to a competing vendor, then there is absolutely nothing they can do to stop them ...

It's the ISVs who largely fund these benchmarks, and all IHVs must play by the rules set out by them, regardless of whether their proposal provides a superior end result to their competitor's ...
Because otherwise we're looking at some obscure h/w which isn't at all in line with where the h/w market in general is evolving - and I don't really see how such h/w would be able to end up in consoles even where perf/watt is key.
AMD HW is far from being "obscure" when two out of three console vendors use it as the basis for their platforms and they dominate the AAA game market. Again, it's the console vendors who will dictate the metrics and requirements that they want out of hardware vendors, not the other way around. You don't seem to understand that if you think Nvidia can somehow subsume control over the entire industry just because they have the leading-edge HW RT implementation, when the industry can just as easily decide to ruin them by releasing benchmarks that are unfavourable to them ...
 
The industry doesn't have to play by a specific vendor's rules, such as using their HW RT implementation. The industry largely makes its own rules, and if they want to use alternatives because it suits their own target demographic better as opposed to a competing vendor, then there is absolutely nothing they can do to stop them ...
"The industry" as in Nv, Intel and Microsoft are what has been stating these rules for RT h/w. The question is if some h/w vendor want to play by it's own rules instead of following the industry, not the other way around.

AMD HW is far from being "obscure" when two out of three console vendors use it as the basis for their platforms and they dominate the AAA game market.
It's also nowhere near being 10x faster in FLOPs, which is what something like you're describing would need to make any sense. Notice that most engines, even when they are used _without_ h/w RT but with advanced s/w approaches to similar problems (Northlight, UE5), run better on non-AMD h/w (and they run even better with h/w RT there).

What you seem to suggest would happen doesn't really have any ground to happen. It is far more likely that AMD will improve their RT h/w eventually, just like they did with every previous h/w deficit in their GPUs (aniso, geometry and tessellation, thread latency, memory compression, etc).
 
"The industry" as in Nv, Intel and Microsoft are what has been stating these rules for RT h/w. The question is if some h/w vendor want to play by it's own rules instead of following the industry, not the other way around.
The industry consists of more than just NV, Intel, and Microsoft; it also consists of ISVs like game developers who make the benchmarks ...

Without benchmarks, everyone else's reason for existence would become tenuous, including their stated rules for HW RT ...
It's also nowhere near being 10x faster in FLOPs, which is what something like you're describing would need to make any sense. Notice that most engines, even when they are used _without_ h/w RT but with advanced s/w approaches to similar problems (Northlight, UE5), run better on non-AMD h/w (and they run even better with h/w RT there).

What you seem to suggest would happen doesn't really have any ground to happen. It is far more likely that AMD will improve their RT h/w eventually, just like they did with every previous h/w deficit in their GPUs (aniso, geometry and tessellation, thread latency, memory compression, etc).
The 10x number you touted is just some arbitrary figure with no meaning. Are these S/W solutions running better on non-AMD HW because of a legitimately superior implementation, or is it because their competitors are crowding them out with brute force by significantly increasing their die sizes and transistor counts? The latter would be a hollow victory ...

The ISVs will dictate what AMD does next; if they believe they want alternatives to HW RT, then nobody can stop them. Not even Nvidia, no matter how much they want the HW market to evolve their way ...

We'll see soon enough whether or not HW RT will live or die without the presence of the consoles ...
 
The ISVs will dictate what AMD does next; if they believe they want alternatives to HW RT, then nobody can stop them. Not even Nvidia, no matter how much they want the HW market to evolve their way ...
You think NVIDIA invented RT in a bubble? They probed Microsoft, console makers, professionals, numerous studios and many engine makers; now every engine has DXR support, consoles have support, the whole industry has support.

You guys are unbelievable in your animosity towards hardware RT, a hallmark of real-time graphics, just because AMD has been behind for 3 generations straight. It's like you can't believe that NVIDIA still manages to be on top despite putting in the effort for it, or that AMD is so damn incompetent that they can't get something as crucial as RT right for so freaking long.

Wake up and get off your high horse; the industry doesn't operate in that shitty way you describe, and AMD has always had the tendency to catch up late in the game. They lost the battle with G-Sync, CUDA, AI, upscaling, VR, and so many other things; this RT crisis they have is not new. And the industry embraces successful solutions that get results done or push the boundaries forward; those slacking off are left behind biting the dust.
 
You think NVIDIA invented RT in a bubble? They probed Microsoft, console makers, professionals, numerous studios and many engine makers; now every engine has DXR support, consoles have support, the whole industry has support.
The only other console vendor that's not Microsoft and targets other AAA game publishers is Sony, and I find the idea of Nvidia ever approaching them laughable. The professional graphics market has different needs to the gaming market, and it's held captive by Nvidia regardless, seeing how widespread CUDA is over there. As far as "numerous studios and engine makers" is concerned, we have one employee here from Epic Games who doesn't sound all that impressed by the bottleneck of maintaining the acceleration structures for ray tracing. Unity Technologies doesn't care about AAA content and is strictly focused on mobile platforms ...

Yes, all engines indeed have DXR support, but it's too bad you can't use it for any content like Valley of the Ancients or Lumen in the Land of Nanite. Then we have Activision backtracking in COD. It must be "whole industry support" alright to have a feature that's incompatible with existing content ...
You guys are unbelievable in your animosity towards hardware RT, a hallmark of real-time graphics, just because AMD has been behind for 3 generations straight. It's like you can't believe that NVIDIA still manages to be on top despite putting in the effort for it, or that AMD is so damn incompetent that they can't get something as crucial as RT right for so freaking long.
If HW RT is truly the "hallmark" of real-time graphics, then why are you so insistent about its value? Why not let benchmarks and new releases speak for themselves?
Wake up and get off your high horse; the industry doesn't operate in that shitty way you describe, and AMD has always had the tendency to catch up late in the game. They lost the battle with G-Sync, CUDA, AI, upscaling, VR, and so many other things; this RT crisis they have is not new. And the industry embraces successful solutions that get results done or push the boundaries forward; those slacking off are left behind biting the dust.
Strange how you didn't mention PhysX, and how you proclaim G-Sync to be some victory when nearly no new displays or anyone else has implemented the standard ...

"Embracing successful solutions" isn't some one-way ticket and the industry exists to make profit whether or not they push the boundaries forward ...
 
If HW RT is truly the "hallmark" of real-time graphics, then why are you so insistent about its value? Why not let benchmarks and new releases speak for themselves?
I do precisely that: I let the benchmarks speak and post them here, and I post about new releases regularly, which happen at an accelerated pace I can't even keep up with.

You are the one making strawman arguments and imaginary predictions based on imaginary tales; you are the one clinging to one game leaving ray tracing out for two versions for reasons unknown, while ignoring the dozens of games with ray tracing releasing every month. For all you know, the next Black Ops could do ray tracing again ... Halo Infinite is adding ray tracing months after release. Developers change their minds back and forth all the time; one developer could make an arrangement with NVIDIA for one game, then switch to AMD or Intel for the next. These are regular occurrences in the industry, and you don't get to make sweeping statements about that, especially not in the face of the overwhelming evidence of industry-wide adoption of DXR.
how you didn't mention PhysX
GPU particles are now an essential part of any modern game, working through DirectCompute.
to be some victory when nearly no new displays or anyone else has implemented the standard ...
Display makers raced to enroll under the badge, and now almost all displays are G-Sync compatible. G-Sync literally swallowed the whole market.
 
targets other AAA game publishers is Sony, and I find the idea of Nvidia ever approaching them laughable.
Don't be naive; Sony and NVIDIA talk to each other all the time, and their PC games have been sponsored by NVIDIA since forever. Sony knows NVIDIA is the most important player in graphics; not talking to them would be stupid, and companies are not stupid, they search for profits through mutual cooperation and mutual benefits. I mean, you just stated that in your last paragraph ffs.

we have one employee here from Epic Games who doesn't sound all that impressed by the bottleneck of maintaining the acceleration structures for ray tracing
He is not responsible for Lumen or ray tracing in UE4 or UE5, so he is far from being the authority on this subject.

Unity Technologies doesn't care about AAA content
Yet they rushed to support ray tracing and shipped it in games.

but it's too bad you can't use it for any content like Valley of the Ancients or Lumen in the Land of Nanite
These are demos, not actual games; you are contradicting yourself here. You are placing two useless demos on a high pedestal while ignoring 100 ray traced games, the Matrix demo (arguably the best demo of all time), and dozens of path traced games that are about to be released. Do you even hear yourself?
 
You think NVIDIA invented RT in a bubble? They probed Microsoft, console makers, professionals, numerous studios and many engine makers; now every engine has DXR support, consoles have support, the whole industry has support.
All the signs are that NVidia built the hardware and then told Microsoft what the API should be.

If there had been consultation, then RDNA 1 would have had ray tracing too. The IHVs normally proceed with new hardware support for major API features along a shared timeline so that the introduction of the API is met with broad support.
 
If there had been consultation, then RDNA 1 would have had ray tracing too.
Consoles were locked down even before RDNA1 was released, and they had RT in them. DXR was announced during Volta, not Turing, by the way.

RDNA1 was a low effort by AMD; it didn't even have a high-end card and was not as power efficient as it should have been (225W for the 5700 XT vs 250W for the 2080 Ti), so AMD couldn't actually do much with RT hardware on 5700 XT-class hardware with relatively low clocks (compared to RDNA2); it would have been completely demolished by Turing and would have made an embarrassment out of AMD. That's why they probably scrapped it altogether and decided RDNA2 is the arch that could at least make some effort in ray tracing. So they released RDNA1 without RT, pretended RT doesn't exist and won't matter for many years, lied about that with a straight face through their PR statements, and chose to focus their propaganda on rasterization only.

Looking at RDNA3 now, AMD is either incapable of doing RT right or unwilling to; even Intel (the new, inexperienced guy) is doing much better than AMD on their first try.
 