GPU Ray Tracing Performance Comparisons [2021-2022]

AMD has spent transistors on a bigger cache to increase efficiency in rasterizing games. nVidia spent transistors on more compute units (double FP32, improved RT cores, etc.).
A bigger cache is what RT needs to help with its random-access problem; it's not a 'rasterization' feature. And doubling FP32 and improving RT cores does not give us more compute units.
My impression is 'NV invests in specialization, AMD invests in generalization', which is perhaps the opposite of yours.
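
To make the 'random access' point concrete, here is a minimal CUDA-flavored sketch of a BVH traversal loop. Everything here (the BvhNode layout, the helpers, the traversal itself) is a hypothetical illustration, not any vendor's actual implementation; the point is only that each ray's next memory fetch depends on the result of the previous one, so loads scatter across the whole tree and a large last-level cache genuinely helps RT:

```cuda
#include <cuda_runtime.h>

// Hypothetical node/ray layout, for illustration only.
struct BvhNode {
    float3 lo, hi;    // AABB bounds
    int left, right;  // child indices; right < 0 marks a leaf
};

struct Ray { float3 o, d; };

// Standard slab test; assumes no zero direction components, for brevity.
__device__ bool hitAabb(const Ray& r, float3 lo, float3 hi)
{
    float tmin = 0.0f, tmax = 1e30f;
    float t0 = (lo.x - r.o.x) / r.d.x, t1 = (hi.x - r.o.x) / r.d.x;
    tmin = fmaxf(tmin, fminf(t0, t1)); tmax = fminf(tmax, fmaxf(t0, t1));
    t0 = (lo.y - r.o.y) / r.d.y; t1 = (hi.y - r.o.y) / r.d.y;
    tmin = fmaxf(tmin, fminf(t0, t1)); tmax = fminf(tmax, fmaxf(t0, t1));
    t0 = (lo.z - r.o.z) / r.d.z; t1 = (hi.z - r.o.z) / r.d.z;
    tmin = fmaxf(tmin, fminf(t0, t1)); tmax = fminf(tmax, fmaxf(t0, t1));
    return tmin <= tmax;
}

// Each iteration loads a node whose address came out of the *previous*
// load (pointer chasing). Neighbouring rays quickly diverge onto
// different subtrees, so these loads scatter across the whole BVH;
// a big last-level cache absorbs some of that scatter.
__device__ int countLeafHits(const BvhNode* nodes, Ray r)
{
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;  // root index
    int hits = 0;
    while (sp > 0) {
        BvhNode n = nodes[stack[--sp]];  // data-dependent load
        if (!hitAabb(r, n.lo, n.hi)) continue;
        if (n.right < 0) { ++hits; continue; }  // leaf
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
    return hits;
}
```

Contrast this with rasterization, where a triangle's texel and framebuffer accesses are largely predictable and coherent across neighbouring pixels.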

All of this is moot though, because the fact is that RTX hinders gameplay and is a novelty. Ironically, you can turn RTX off and you still get lighting in games... and you get 30%+ more frames. I buy a new video card for better performance, not less.
uuuhhh... this really does not sound popular ;) I'd add to this: even with RTX turned on, lighting still is not dynamic in most titles. So the arguments that follow are more about the promised advantage than about what we actually got.
 
He set the bar at "realism". Realism implies dynamic lighting, since, you know, that's how the real world works. Of course it's possible to make static scenes look awesome with lightmaps. But it's up for debate whether anything static in a game should be considered great-looking by today's standards.

I think a good many of you are not being objective.

We have lights that cast shadows and move... that are not ray traced. We do not need ray-traced light to have realistic lighting. Again, what RAY TRACING allows is for game developers to make games EASIER... without having to spend so much time on baked-in lighting.

Ray tracing doesn't add to the realism of games... because not all lights in games are traced...! (Leaving gaps in the realism).



Today (2021), Ray tracing and ray traced reflections are just a novelty. They do not do anything for the Gamer, except sap performance...
 
We do not need ray-traced light to have realistic lighting.

It is objectively true that RT produces the most accurate lighting in games. So you must mean that you're happy with the limitations of alternative methods (SSR, irradiance fields, Lumen). That's fine but people have different standards.

Ray tracing doesn't add to the realism of games... because not all lights in games are traced...! (Leaving gaps in the realism). Today (2021), Ray tracing and ray traced reflections are just a novelty.

So ray tracing will only be useful once everything is raytraced? And we should be happy with lesser methods until then?
 
It is objectively true that RT produces the most accurate lighting in games. So you must mean that you're happy with the limitations of alternative methods (SSR, irradiance fields, Lumen). That's fine but people have different standards.

So ray tracing will only be useful once everything is raytraced? And we should be happy with lesser methods until then?


It is objectively true that Ray Tracing saps considerable performance... and not all the lighting in those games is being ray-traced, so even those games are NOT realistic...!

End of story....



BTW, I am happy with the PERFORMANCE that non-RT lighting gives me....
 
The DF video on Doom shows how a combination of RT and SSR currently produces results that neither could achieve on its own. There will be a lot of gray area and a lack of absolute truths for the foreseeable future.
 
It is objectively true that Ray Tracing saps considerable performance... and not all the lighting in those games is being ray-traced, so even those games are NOT realistic...!

End of story....

BTW, I am happy with the PERFORMANCE that non-RT lighting gives me....

It's fantastic that you're satisfied with current lighting in games. We should all be so lucky :)

The DF video on Doom shows how a combination of RT and SSR currently produces results that neither could achieve on its own. There will be a lot of gray area and a lack of absolute truths for the foreseeable future.

Is the need for SSR due to the relatively large steps used in ray marching, or some more fundamental limitation of RT?
 
Ray tracing doesn't add to the realism of games... because not all lights in games are traced...! (Leaving gaps in the realism).
The same argument would imply that pre-computed lighting doesn't add to the realism of games either – since it doesn't capture the contribution of moving objects to the propagation of light. So why waste time on all that pre-computation in the first place? Just have flat, artificial lighting on everything. And why bother trying to shadow all these dynamic light sources if they already looked OK without shadows in '90s games?
 
It's a reductio ad absurdum. The point is that it's ridiculous to think that just because a technique doesn't achieve complete realism in a given scene, it can't add to the realism of that scene. That's literally the history of computer graphics – a series of ever more realistic approximations.

So now in Spider-Man where you can see the world moving in the glass you are climbing up, instead of a crude static approximation via cube maps, do we think "oh, this isn't any more realistic at all, because there is some barely perceptible reduction in geometric detail due to simplification of the BVH structure"? Or do we think "wow, now I can see the pedestrians and the cars and what looks like a living city"?
 
Is the need for SSR due to the relatively large steps used in ray marching, or some more fundamental limitation of RT?
It's a result of rasterization performance optimisations. RT can't see things that are "faked", i.e. done with transparent textures or in screen space. Thus SSR helps, since it runs in screen space and sees all these "fakes". It's a matter of performance for the most part.
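
As a control-flow sketch of that hybrid (my own illustration; the helper functions are hypothetical stubs, not a real engine's API): SSR is tried first because it can reflect whatever the rasterizer actually drew, and the BVH ray is the fallback for everything off-screen:

```cuda
#include <cuda_runtime.h>

// Hypothetical stand-ins for a real renderer's SSR march and RT trace.
// Stubbed out; only the control flow below is the point.
__device__ bool ssrMarch(float2 uv, float3 dirVS, float3* hitColor)
{
    // Real code would ray-march the depth buffer and return true on a
    // valid screen-space hit, writing the reflected color to hitColor.
    (void)uv; (void)dirVS; (void)hitColor;
    return false;
}

__device__ float3 traceRtReflection(float3 originWS, float3 dirWS)
{
    // Real code would trace a ray against the BVH (e.g. via DXR/OptiX).
    (void)originWS; (void)dirWS;
    return make_float3(0.f, 0.f, 0.f);
}

__device__ float3 reflectionColor(float2 uv, float3 originWS,
                                  float3 dirWS, float3 dirVS)
{
    float3 c;
    // SSR first: it sees everything the rasterizer drew, including
    // alpha-tested "fakes" that were never put into the BVH.
    if (ssrMarch(uv, dirVS, &c))
        return c;
    // Pay for a real ray only when the screen-space march fails
    // (the ray leaves the screen, or the hit is occluded in depth).
    return traceRtReflection(originWS, dirWS);
}
```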
 
It's a result of rasterization performance optimisations. RT can't see things that are "faked", i.e. done with transparent textures or in screen space. Thus SSR helps, since it runs in screen space and sees all these "fakes". It's a matter of performance for the most part.

Oh yes, that's right. I don't see that changing anytime soon.
 
Looking at just Minecraft RT...

Most of these Minecraft RT videos rack up huge view counts, millions in many cases, not to mention the comments about how blown away people are. The general public is impressed, to say the least.
And that's just Minecraft. CP2077 has shown five different RT methods in use at once, all impressively done on today's hardware.

RT is the real game-changer, along with AI/ML deep-learning reconstruction to gain enormous performance boosts with no IQ loss, even better IQ in some cases. RTX IO / DirectStorage is coming along nicely too.
 
He set the bar at "realism". Realism implies dynamic lighting, since, you know, that's how the real world works. Of course it's possible to make static scenes look awesome with lightmaps. But it's up for debate whether anything static in a game should be considered great-looking by today's standards.

Would you not say The Last of Us Part II is realistic?
 
Would you not say The Last of Us Part II is realistic?
Every single scene in that game is meticulously hand-authored. I believe most of the lighting is baked in. So this is exactly what @trinibwoy was talking about, i.e., yes, you can bake lightmaps on everything and make it look great, but you lose dynamism. So you can't have a dynamic time of day. Furthermore, even with a fixed time of day you can't bake lighting on unpredictably moving objects, so you have to resort to probes, which have their own caveats (light leaks). So yes, despite all the effort from an AAAA studio (if there ever was one), the game still has inconsistent lighting on dynamic objects.
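
To make the probe caveat concrete, here's a toy sketch (my own illustration, assuming a plain irradiance grid; layout and names are made up). Trilinear blending of the eight surrounding probes has no visibility term, so a probe on the far side of a wall still gets weight, which is exactly the "light leak":

```cuda
#include <cuda_runtime.h>

// Hypothetical probe grid; bounds checks omitted for brevity.
struct ProbeGrid {
    const float3* irradiance;  // one RGB value per probe, flattened
    int nx, ny, nz;            // grid resolution
    float3 origin; float cell; // world-space placement
};

__device__ float3 probeAt(const ProbeGrid& g, int x, int y, int z)
{
    return g.irradiance[(z * g.ny + y) * g.nx + x];
}

__device__ float3 sampleProbes(const ProbeGrid& g, float3 p)
{
    // Cell-relative coordinates of the shading point.
    float fx = (p.x - g.origin.x) / g.cell;
    float fy = (p.y - g.origin.y) / g.cell;
    float fz = (p.z - g.origin.z) / g.cell;
    int x = (int)fx, y = (int)fy, z = (int)fz;
    float tx = fx - x, ty = fy - y, tz = fz - z;

    float3 c = make_float3(0.f, 0.f, 0.f);
    for (int k = 0; k < 8; ++k) {
        int dx = k & 1, dy = (k >> 1) & 1, dz = (k >> 2) & 1;
        float w = (dx ? tx : 1.f - tx) *
                  (dy ? ty : 1.f - ty) *
                  (dz ? tz : 1.f - tz);
        // The flaw, by design: no visibility test. A probe sitting in
        // a bright room next door still contributes weight w here,
        // bleeding its light through the wall.
        float3 pr = probeAt(g, x + dx, y + dy, z + dz);
        c.x += w * pr.x; c.y += w * pr.y; c.z += w * pr.z;
    }
    return c;
}
```

Schemes like DDGI fight this with per-probe depth/visibility data, but that's extra cost and its own set of artifacts.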

Contrast this with Metro EE (from a much smaller and lower-funded studio), which basically discarded all the fakery and replaced it with RT light sources, and you end up with much more consistent-looking lighting throughout the game. They just don't have ND's art budget. RT doesn't remove the need for a lighting director; it just removes the need for meticulous hand-placement of fake lighting and careful use of fragile heuristics to realize the director's vision.
 
The disparaging of progress towards real-time RT is truly astonishing. And these "all-or-nothing", "it's not as performant as the traditional path so it shouldn't even exist" arguments are just garbage. Yes, RT is expensive and needs hardware that's different from the way we've been building GPUs. It needs painful changes to lighting engines, especially during these intermediate years when we'll have to delicately navigate the balance between simulation and fakery. It hurts Nvidia, AMD, Intel, Epic, Microsoft, Sony. And yet, they have all decided to move in this direction. Why? Because the experts running the show aren't luddites who would be excited to get out of bed at the prospect of building the next thing that just does the same old, same old but 35% faster.

Are the current hardware architectures the be-all, end-all solution? Of course not. They will evolve. But we've taken the first steps and there's no looking back, despite the frothing hysteria from those who are offended by the perf cost. Just turn the damn feature off -- while you still can.
 
And these "all-or-nothing", "it's not as performant as the traditional path so it shouldn't even exist" arguments are just garbage.
I don't see customer feedback telling us the feature is not yet convincing as 'garbage'. Instead of depicting these people as hysterical fools and ignoring them, we had better improve RT so they are happy with its win-to-cost ratio. Claiming self-appointed expertise won't convince anybody.
Telling them to turn RT off is no solution, just ignorance, and pointing towards evolving hardware is just admitting incompetence. Sorry, I'm not impressed by your arguments; note I'm assuming developers would communicate it this way, which would be very bad.

So, besides my critique of API limitations, there is also the question of how to use RT to get the most benefit.
To me, it divides into 3 groups:
* Full RT lighting, all dynamic, GI + reflections + shadows: Exodus is the only example (ignoring Minecraft or Quake). IMO that's impressive, something new and worth the high perf cost.
* Many effects: Control, CP 2077. GI still mostly static. IMO it looks better, but not really enough to be worth the cost. I'm totally not convinced and would indeed turn RT off after checking it out.
* Single effect: Eternal or CoD. Just reflections or soft shadows. The hit on perf is small. IMO that's good, and I would happily enjoy RT in those titles.
Then we've had some mediocre titles like Godfall or RE8. I only mention them to say I do not count them in the third category. Guess we all agree that's not what we want.

To me it's interesting that I have this 'valley of disappointment' in the middle. And this second category is also the most commonly used. Thus my overall feeling about RT in games so far is disappointment, although it wouldn't have to be. (Really meaning just the visual results, not the API stuff.)
I guess my opinion is a minority here? The reaction to Control / CP was overall very positive, and I can relate. But looking back it feels like: 'Bolted on too much, not really getting the best of both worlds.'
 
I don't see customer feedback telling us the feature is not yet convincing as 'garbage'. Instead of depicting these people as hysterical fools and ignoring them, we had better improve RT so they are happy with its win-to-cost ratio.

Of course RT will improve, just like every other aspect of 3D rendering has improved with time. That’s not the issue.

For those who think RT isn’t worth the performance hit they should simply not use it. For those who think current lighting methods are good enough they can turn it off. That’s why options are great. Literally nobody has had RT forced on them in any game so far.

There does seem to be a bit of hysteria around the limitations of the first iteration of an advanced feature that can be turned off if needed. Which I find ironic because I think we should be celebrating the impressive adoption rate of a long desired capability that fundamentally improves the rendering pipeline and artist workflows. It’s impressive because we’ve had RT hardware on the market for less than 3 years which is nothing in terms of game development timeframes. With consoles now in the mix we could be in for a treat this generation. So yeah I don’t understand the Debbie Downers either.
 
What's numbing is that GPU compute has only advanced by a factor of about 10 over the past 10 years:

AMD Radeon HD 6970 Specs | TechPowerUp GPU Database
2.7 TFLOPS FP32

AMD Radeon RX 6900 XT Specs | TechPowerUp GPU Database
23 TFLOPS FP32

Don't look at bandwidth growth or you'll cry.
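
For reference, working those two TechPowerUp numbers through (my arithmetic, not quoted from anyone):

```latex
\left(\frac{23\ \mathrm{TFLOPS}}{2.7\ \mathrm{TFLOPS}}\right)^{1/10} \approx 1.24
```

That's roughly 24% compounded per year, or about 8.5x per decade, so "a factor of about 10 over the past 10 years" is the right ballpark.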

I believe Ampere represents the easy part of the curve in improving ray traced performance. There's nearly no growth left in terms of BVH storage efficiency (unless NVidia hasn't implemented the compression techniques first described years ago) and VRAM bandwidth is going nowhere fast, too. Compute is looking like it has a miserable growth curve from here.

So, welcome to the next ten years of kludges to make RT work better than it currently does.

I wonder how much of Nanite was possible 10 years ago...
 
To me it's interesting that I have this 'valley of disappointment' in the middle. And this second category is also the most commonly used. Thus my overall feeling about RT in games so far is disappointment, although it wouldn't have to be.

God, you must hate the new consoles then, with the huge performance impact RT has there and the minimal visual return, holy crap. I'm glad most of the population appreciates ray tracing lol.

Compute is looking like it has a miserable growth curve from here.

We're at 36 TFLOPS of raw compute now, with AMD's next going much higher than that. Bandwidth ain't a problem anywhere; we have SSDs now.
 
I think a lot of GPUs during that time lacked 64-bit atomics, which are needed for applying depth testing to Nanite geometry ...
I'm sure there are more aspects of Nanite's specific algorithm that weren't possible then. That's why I qualified my question with "how much of" ;)

DirectCompute was relatively primitive back then, and geometry was stuck in the tessellation wars, with geometry shaders being a disaster ("just don't use it").

So, much hackery required.
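
For anyone curious what the 64-bit-atomic trick looks like, here's a sketch under my own assumptions (mirroring public descriptions of Nanite-style software rasterization, not Epic's actual code): pack depth into the high 32 bits and a visibility payload into the low 32 bits, so a single atomicMax performs the depth test and the write in one step:

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// Illustration only. Depth goes in the high 32 bits, a visibility
// payload (e.g. a cluster/triangle id) in the low 32 bits. Requires
// compute capability 3.5+ for 64-bit atomicMax.
__device__ void writeVisibility(unsigned long long* visBuffer, int pixel,
                                float depth, uint32_t payload)
{
    // For non-negative floats the IEEE-754 bit pattern is monotonic,
    // so comparing the integer bits compares the depths. With
    // reversed-Z (near plane = 1.0), "greater" means "nearer", so the
    // atomic keeps the closest fragment.
    uint32_t depthBits = __float_as_uint(depth);
    unsigned long long packed =
        ((unsigned long long)depthBits << 32) | (unsigned long long)payload;
    atomicMax(&visBuffer[pixel], packed);
}
```

Without 64-bit atomics you'd need per-pixel locks or a multi-pass scheme to get the same effect, which is presumably why their absence on older GPUs matters here.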
 