GPU Ray Tracing Performance Comparisons [2021-2022]

It probably takes a lot more time than people think to make a good RT h/w implementation. RDNA2/3 RT is a "quick and cheap" solution to the fact that it would be impossible to compete without such h/w going forward (which also rather undercuts the whole argument that RT h/w isn't needed at all). It is slow but takes almost no die space and is thus "good enough" to check a feature box. Nvidia spent several years making their RT h/w for Turing, then more or less ported the same h/w to Ampere, and even Lovelace isn't really that different from Turing in this respect. If I had to guess, I'd say RDNA4 is the most likely generation where AMD will improve their RT h/w beyond just extending what was done for RDNA2.

The problem for AMD though is that the longer they wait the worse the situation will become.
 
I do precisely that: I let the benchmarks speak and post them here, and I post about new releases regularly, which happen at an accelerated pace I can't even keep up with.

You are the one making strawman arguments and imaginary predictions based on imaginary tales, you are the one clinging to one game dropping ray tracing for two versions for reasons unknown while ignoring the dozens of games with ray tracing releasing every month. For all you know the next Black Ops could do ray tracing again .. Halo Infinite is adding ray tracing months after release. Developers change their minds back and forth all the time; one developer could make an arrangement with NVIDIA for one game, then switch to AMD or Intel for the next. These are regular occurrences in the industry, so you don't get to make sweeping statements about that, especially not in the face of the overwhelming evidence of industry-wide adoption of DXR.
The defensive position that you're taking up seems to indicate otherwise ...

We'll see soon enough either way with the next batch of games coming up regardless ...
GPU particles are now an essential part of any modern game, working through DirectCompute.
How do you reconcile that with the fact that virtually no developer is implementing Nvidia's physics library anymore? It's not like they single-handedly standardized DirectCompute either, since the first HW implementation came from AMD ...
Display makers raced to enroll under the badge, and now almost all displays are G-Sync compatible. G-Sync literally swallowed the whole market.
This is getting irrelevant to the thread at hand, but winning on a brand basis is empty compared to winning the technical standard itself. No other vendor or display manufacturer has ever had to implement the actual G-Sync technical specification itself, while Nvidia has to maintain two separate implementations of adaptive refresh rate technology or risk obsoleting their own customers who bought into their proprietary technology, just like how they burned them with 3D Vision before ...

What do you think is more important in winning a technical standard ? The brand or the technical specification/implementation itself ? Think about how vapid it is to place so much importance on the naming over the technology ...
Don't be naive, Sony and NVIDIA talk to each other all the time; their PC games have been sponsored by NVIDIA since forever. Sony knows NVIDIA is the most important player in graphics, and to not talk to them would be stupid, and companies are not stupid: they search for profits through mutual cooperation and mutual benefits. I mean you just stated that in your last paragraph ffs.
I highly doubt Nvidia has kept in contact with what Sony has to say about hardware design ever since the start of last generation ...
He is not responsible for Lumen nor ray tracing in UE4 or UE5, so he is far from being the authority on this subject.
It's a damning indictment that he's now working for a corporation that clearly goes against the interests of his former employer (Intel), which is trying to push HW RT as much as possible yet whose HW can't even run Nanite or Lumen ...

How anyone cannot take what he has to say seriously is beyond me, since he's played both sides (IHV and ISV) before.
Yet they rushed to support ray tracing and shipped it in games.
It's funny that Unity's only RT game is a port of an iOS game, which means the likes of Apple or others like ARM and Qualcomm have bigger voices at Unity Technologies than Nvidia, all of whom have been even more resistant to implementing HW RT than the discrete graphics hardware vendors ...

The first RT game might've even been the only one to have used Unity's HDRP pipeline as well, which says a lot about how much of an afterthought consoles and PCs really are to Unity overall ...
These are demos, not actual games. You are contradicting yourself here; you are placing two useless demos on a high pedestal while ignoring 100 ray traced games, the Matrix demo (arguably the best demo of all time), and dozens of path traced games that are about to be released. Do you even hear yourself?
These two useless demos show that developers now have real capability in a real game engine to make content incompatible with HW RT!

How are we supposed to somehow trust that developers won't use Nanite to render terrain or foliage ? Epic Games is literally undermining the future of HW RT itself by opening the doors to the creation of incompatible content ...
 
Looking at RDNA3 now, AMD is either incapable of doing RT right, or unwilling to, even Intel (the new inexperienced guy) is doing much better than AMD on their first try.
Intel has plenty of experience with RT. And they have experience with implementing stuff in HW too.

Because traversal code, utilizing existing intersection instructions, is just 20 lines or so, and we know AMD has tried all kinds of stackless, short-stack, etc. variants for their Radeon Rays stuff, we can assume they indeed did not want to accelerate it further for now.
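
For context on the "~20 lines" claim, here is a minimal stack-based traversal loop sketched in plain C++. The Ray/Hit/BvhNode layout, the hit_box slab test and the trace function are hypothetical stand-ins invented purely for illustration, not AMD's actual BVH format, the RDNA intersection instruction, or Radeon Rays code; the point is simply how small the traversal loop itself is.

[code]
#include <algorithm>
#include <cfloat>
#include <cstdint>

// Hypothetical stand-ins for illustration only -- not AMD's data layout.
struct Ray     { float o[3], inv_d[3], t_max; };   // origin, 1/direction, max distance
struct Hit     { float t = FLT_MAX; uint32_t prim = ~0u; };
struct BvhNode {
    float    lo[3], hi[3];   // node bounding box
    uint32_t child[2];       // internal node: child indices
    uint32_t prim;           // leaf: primitive index
    bool     is_leaf;
};

// Ray/AABB slab test; conceptually this is the job the hardware
// box-intersection instruction does (for several children at once).
static float hit_box(const BvhNode& n, const Ray& r)
{
    float t0 = 0.0f, t1 = r.t_max;
    for (int a = 0; a < 3; ++a) {
        float ta = (n.lo[a] - r.o[a]) * r.inv_d[a];
        float tb = (n.hi[a] - r.o[a]) * r.inv_d[a];
        t0 = std::max(t0, std::min(ta, tb));
        t1 = std::min(t1, std::max(ta, tb));
    }
    return t0 <= t1 ? t0 : FLT_MAX;   // entry distance, or FLT_MAX on miss
}

// The traversal loop itself -- the part that is only ~20 lines.
Hit trace(const BvhNode* nodes, uint32_t root, const Ray& ray)
{
    Hit hit;
    uint32_t stack[32];      // fixed-depth stack is enough for this sketch
    int sp = 0;
    stack[sp++] = root;
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];
        float t = hit_box(n, ray);
        if (t >= hit.t) continue;           // missed, or behind the closest hit
        if (n.is_leaf) {                    // real code would run the
            hit.t = t;                      // triangle test here; this sketch
            hit.prim = n.prim;              // just records the box entry point
        } else {
            stack[sp++] = n.child[0];       // a tuned version orders children
            stack[sp++] = n.child[1];       // front-to-back or goes stackless
        }
    }
    return hit;
}
[/code]

Short-stack and stackless variants replace the explicit stack with restart trails or parent links, which is the design space the post refers to.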

That's pretty obvious. But likely we should wait to see what their 'dedicated traversal' slide actually means. If they did implement traversal HW, and it still is that slow, that would give you a lot of ammo.
 
November 4, 2022
Boost your Forza Horizon 5 framerates on PC whilst maximising image quality by enabling NVIDIA DLSS Super Resolution or AMD FSR 2.2. These resolution upscaling features are coming in the Donut Media series update available to download from the Microsoft Store and Steam on Tuesday, November 8.


Q: What improvements are available for ray tracing in Forza Horizon 5 on PC?

We have added two new settings for ray tracing quality. Ultra and Extreme settings allow player car reflections to be rendered in Free Roam and Races.
This also allows ray tracing to be enabled in Photo Mode for both the player vehicle and AI cars.

Q: What are the differences between each of the ray tracing options in Forza Horizon 5?

The Medium and High settings function as before, with ray tracing exclusively rendering in Forzavista.
Meanwhile, the Ultra and Extreme options expand ray tracing to Photo Mode, Free Roam and Races.
Some ray tracing settings and scenarios will render reflections at half resolution, while others render at full resolution. Please refer to the table below for the breakdown on a per-setting, per-scenario basis:
 
The editors from PC Games Hardware took AMD's ray tracing benchmark results from the keynote and made some charts from them.

The CP2077 result is poor, but the others aren't bad and are up there with the 3090 Ti.
 

Attachments: CP2077.png, DL2.png, MEEE.png
The editors from PC Games Hardware took AMD's ray tracing benchmark results from the keynote and made some charts from them.

The CP2077 result is poor, but the others aren't bad and are up there with the 3090 Ti.

Navi 31's massive ALU to everything else ratio may be crushing relative performance here. Navi 32 looks to cut the ratio by 25%, so it'll be interesting to see if relative RT performance goes up there.
 
Whether this is "not bad" or not will depend on the results Lovelace shows at the same price. The 3090 Ti itself isn't that much better than the 3080, which is a $700 card from 2020.
If the 4070 is around the 3090 Ti in RT, then this generation would be similar to last, comparing the 6800 XT/6800 to the 3070.
 
Navi 31's massive ALU to everything else ratio may be crushing relative performance here. Navi 32 looks to cut the ratio by 25%, so it'll be interesting to see if relative RT performance goes up there.
Navi 32 allegedly has 30 WGPs (vs 48) and 2/3rds everything else. It's a slightly lower ratio, but only by 6.25%
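For the arithmetic behind that figure (using the rumored counts quoted above, not confirmed specs): 30 WGPs against two thirds of the "everything else" is equivalent to 30 ÷ (2/3) = 45 WGPs against a full complement, versus Navi 31's 48, and 45 ÷ 48 = 0.9375, i.e. a ratio only 6.25% lower.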
 
The defensive position that you're taking up seems to indicate otherwise ...
And the offensive, anti-RT position you are showing is indicative of otherwise too. Funny how these arguments show up only when AMD is beaten time and again in RT, which indicates again that these fake arguments stem only from a defensive position over AMD's weak spot in RT.

We'll see soon enough either way with the next batch of games coming up regardless ...
Yep, just stick around this thread, and enjoy the RT ride.

How do you reconcile that with the fact that virtually no developer is implementing Nvidia's physics library anymore?
PhysX was a precursor for GPU particles. Just like TXAA was a precursor for TAA.

What do you think is more important in winning a technical standard ? The brand or the technical specification/implementation itself ?
NVIDIA forced display makers to make quality displays. The point of the G-Sync module was never about making stuff exclusive to NVIDIA; it was to force displays to have some sort of quality design to deliver an optimal variable refresh rate experience. FreeSync encouraged the spread of trash displays, and later had to adopt several tiers to distinguish the good ones from the bad ones. With G-Sync you knew your display was good. That's why display makers rushed to have the G-Sync badge: having it meant the display was good. So you see, you are contradicting yourself again; what won here is the standard that promoted quality, not the standard that made a mess out of quality and was made in a rush to steal headlines with no regard for quality.

If you care about proper implementations and specifications then you know it's the one that ended up winning with G-Sync.

which means the likes of Apple or others like ARM and Qualcomm have bigger voices at Unity
Who cares? Unity has RT, that's what matters. Also you are wrong, ARM GPUs are now RT capable. Only Apple is left behind.

These two useless demos show that developers now have real capability in a real game engine to make content incompatible with HW RT!
Two useless demos still; the real demo is the Matrix demo, you know .. the one where you walk, fly and drive around like an actual game. The one that actually supports HW-RT in a spectacular way. When Epic wanted to make a next-gen demo, this is the demo they made, and they used HW-RT to pack the punch. And that's what the rest of the industry is doing.

How are we supposed to somehow trust that developers won't use Nanite to render terrain or foliage ? Epic Games is literally undermining the future of HW RT itself by opening the doors to the creation of incompatible content ...
The UE 5.1 RTX branch fully supports Nanite and foliage with very good performance. UE5 is a constantly changing landscape .. you are naive if you think Epic will risk losing NVIDIA's long-standing support. More RT features and enhancements will come, as Epic stated. So stick around, and watch them do it.
 
Consoles were locked down even before RDNA1 got released and they had RT in them. DXR was announced during Volta not Turing by the way.

RDNA1 was a low effort by AMD: it didn't even have a high end card, and was not as power efficient as it should have been (225W for the 5700 XT vs 250W for the 2080 Ti), so AMD couldn't actually do much with RT hardware on 5700 XT class hardware with relatively low clocks (compared to RDNA2); it would have been completely demolished by Turing and would have made an embarrassment out of AMD. That's why they probably scrapped it altogether and decided RDNA2 is the arch that could at least make some effort in ray tracing. So they released RDNA1 without RT, pretended RT doesn't exist and won't matter for many years, lied about that with a straight face through their PR statements, and chose to focus their propaganda on rasterization only.

Looking at RDNA3 now, AMD is either incapable of doing RT right, or unwilling to, even Intel (the new inexperienced guy) is doing much better than AMD on their first try.
Quoted for comedy value.
 
If the 4070 is around the 3090 Ti in RT, then this generation would be similar to last, comparing the 6800 XT/6800 to the 3070.
Kind of, with the side effect of more games using RT now, which means that the landscape will be even less favorable to "rasterization wins".
 
Consoles were locked down even before RDNA1 got released and they had RT in them. DXR was announced during Volta not Turing by the way.

RDNA1 was a low effort by AMD: it didn't even have a high end card, and was not as power efficient as it should have been (225W for the 5700 XT vs 250W for the 2080 Ti), so AMD couldn't actually do much with RT hardware on 5700 XT class hardware with relatively low clocks (compared to RDNA2); it would have been completely demolished by Turing and would have made an embarrassment out of AMD. That's why they probably scrapped it altogether and decided RDNA2 is the arch that could at least make some effort in ray tracing. So they released RDNA1 without RT, pretended RT doesn't exist and won't matter for many years, lied about that with a straight face through their PR statements, and chose to focus their propaganda on rasterization only.

Looking at RDNA3 now, AMD is either incapable of doing RT right, or unwilling to, even Intel (the new inexperienced guy) is doing much better than AMD on their first try.

AMD did make a version of RDNA1 that uses hardware RT.
 
These are demos, not actual games. You are contradicting yourself here; you are placing two useless demos on a high pedestal while ignoring 100 ray traced games, the Matrix demo (arguably the best demo of all time),

Nah, that'd be RTX Racer. The gap between the two GPUs there would likely be fairly extreme, given that demo takes advantage of all the new RTX 4xxx features, unlike every other comparison point thus far.
 
...most of Microsoft first party games won't...

In fact, there will easily be more games that don't use UE5 and Lumen than there will be games that do.

Microsoft created another development studio purely to assist other Microsoft studios with UE5. That's all that studio will be doing: helping other MS studios with UE5.

There's already a lot of MS studios (The Coalition, Ninja Theory, Undead Labs, InXile, Obsidian Entertainment, RARE, etc.) that have committed to using UE5 as well as rumors that others (like 343i) will be switching to UE5 as well.

Some MS studios might keep using their existing engine. iD Software, Bethesda, Turn 10 (includes Playground Games), Mojang, Arkane and maybe one or two others.

Of those only iD Software "might" push heavy RT. Although as an FPS maker whose entire reputation lives and dies upon the sword of fast and fluid gameplay with precise controls, I'm not sure they'll push really heavy RT.

While Mojang put out an impressive path tracing demo using Minecraft, they still haven't really done anything with it so I wouldn't hold my breath on RT suddenly becoming a big focus for them.

Turn 10 puts much more focus on physics than anything else due to them being focused on the racing genre where framerate and accurate physics are king. Thus Turn 10 and by association Playground Games are likely to continue to go light on RT. Bethesda isn't exactly known for pushing state of the art graphics and they are far more focused on tracking LOTS AND LOTS of objects in an open world. World's Edge (Age of Empires) isn't exactly going to be pushing the graphics envelope.

Basically most of Microsoft's internal studios have either started using UE5, are currently on UE4 and will be switching to UE5 or are on a proprietary engine and will be switching to UE5.

Likewise with most of the gaming industry. I'm already hearing about multiple game developers who are dropping their proprietary engine in order to start working in UE5 because the results are better than what they can accomplish with their own engine.

Regards,
SB
 
A game doesn't have to be "RT heavy" to give Nvidia an edge; "light RT" games do the same just as well, with the 3070 generally beating the 6800 in them.
Yes, I wasn't disagreeing with your claim that the landscape will be less favourable to AMD for having more RT games. I am just saying that the extra rasterization performance may help more in "light RT" titles. For example in Spiderman the 3090 Ti only looks to be about 20% faster than the 6950 XT at 4K W/RT, so 50% more performance on the AMD side would push it beyond.
 
For example in Spiderman the 3090 Ti only looks to be about 20% faster at 4K W/RT
20% faster how? In Spider-Man's heavy RT scenes, at 4K resolution, the 3080 is 65% faster than the 6800 XT using max RT settings, and 45% faster using medium RT settings.


Even without heavy scenes, the 3080 Ti is 45% faster than the 6900 XT at both 1440p and 2160p using max RT settings.
 
A game doesn't have to be "RT heavy" to give Nvidia an edge; "light RT" games do the same just as well, with the 3070 generally beating the 6800 in them.
It's looking like MSRP comparisons could shift from last gen though. The 7800 will likely be cheaper than the 4070, whereas the 6800 was dearer than the 3070, at least going by the effectively fictional MSRPs. Lower pricing does appear to be AMD's answer to weak ray tracing this gen, which is disappointing. Hopefully next gen they can finally compete on all fronts but still keep Nvidia in check with more reasonable pricing.
 