GPU Ray Tracing Performance Comparisons [2021-2022]

That's some nice info.
But it does not say much, because information about traversal is missing, and we don't know what 'new ray box sorting and traversal' actually means.
We saw some leaked new instruction opcodes, but it's not clear to me what they might do either. (Not sure if those are actual instructions at all, or some configuration stuff.)

So we have to wait for the ISA docs, and synthetic benchmarks that isolate pure RT performance from the rest of the game workload would be nice.
If they have HW traversal, performance should be somewhat close to Turing. In the estimated PCGH benchmarks the XTX beats the 2080 Ti in RT games by 50-100%; it's even at 3080 level.

That's not as bad as my quick impression from the presentation.
Maybe I was wrong to assume they still have no full HW traversal?
But if they do, why didn't they spend a few seconds at the presentation making this clear?
And if traversal is fully in HW, why would there still be a need to return sorted boxes?
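To make concrete what I mean by that question, here's a rough sketch of shader-driven traversal sitting on top of a node-intersection helper, and why near-to-far sorted child returns matter in that setup. Everything here (the 4-wide node layout, the helper names, the stack size) is my own assumption for illustration, not anything taken from AMD's ISA:

```cpp
// Sketch only: shader-style BVH traversal where the "hardware" part is just a
// node/box intersection helper and the loop itself still runs in the shader.
// The 4-wide layout and all names here are assumptions for illustration.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>

struct Ray  { float ox, oy, oz, dx, dy, dz, tmax; };
struct Box  { float lo[3], hi[3]; };

struct Node {                        // 4-wide interior node
    Box      child_box[4];
    uint32_t child_index[4];         // 0xFFFFFFFF marks an empty slot
    bool     child_is_leaf[4];
};

// Slab test: entry distance of the ray into the box, or +inf on a miss.
static float intersect_box(const Ray& r, const Box& b) {
    float t0 = 0.0f, t1 = r.tmax;
    const float o[3] = { r.ox, r.oy, r.oz };
    const float d[3] = { r.dx, r.dy, r.dz };
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / d[a];
        float tn  = (b.lo[a] - o[a]) * inv;
        float tf  = (b.hi[a] - o[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
    }
    return (t0 <= t1) ? t0 : std::numeric_limits<float>::infinity();
}

struct ChildHit { float t; uint32_t index; bool leaf; };

// What a "sorted boxes" return amounts to: test all 4 children and hand them
// back ordered near-to-far, so the caller can descend the closest subtree first.
static int intersect_node_sorted(const Ray& r, const Node& n, ChildHit out[4]) {
    int count = 0;
    for (int i = 0; i < 4; ++i) {
        if (n.child_index[i] == 0xFFFFFFFFu) continue;
        float t = intersect_box(r, n.child_box[i]);
        if (std::isfinite(t))
            out[count++] = { t, n.child_index[i], n.child_is_leaf[i] };
    }
    std::sort(out, out + count,
              [](const ChildHit& a, const ChildHit& b) { return a.t < b.t; });
    return count;
}

// The shader-side loop that benefits from the sorting: children are pushed
// far-to-near, so the nearest one is popped first and tmax shrinks early,
// letting farther boxes be culled sooner on closest-hit queries.
static float trace_closest(Ray ray, const Node* nodes,
                           float (*intersect_leaf)(const Ray&, uint32_t)) {
    uint32_t stack[64];
    int sp = 0;
    stack[sp++] = 0;                              // root node index
    while (sp > 0) {
        const Node& node = nodes[stack[--sp]];
        ChildHit hits[4];
        int n = intersect_node_sorted(ray, node, hits);
        for (int i = n - 1; i >= 0; --i) {        // push far-to-near
            if (hits[i].leaf)
                ray.tmax = std::min(ray.tmax, intersect_leaf(ray, hits[i].index));
            else
                stack[sp++] = hits[i].index;
        }
    }
    return ray.tmax;
}
```

The point being: if the instruction only returns sorted child hits, the stack management and the loop above still live in the shader, which is why 'returns sorted boxes' and 'full HW traversal' aren't quite the same thing.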

Conclusion: in the future I shall ignore all those marketing announcements and wait until proper info appears...
 
What's a "node"? Tomorrow's node increments probably won't be equivalent to today's, just as todays aren't equivalent to those in the past. There'll be 4 more "nodes" unless TSMC and Intel decide to pivot to making jellybeans, but who is to say what they'll cost, when they'll arrive, or what they'll deliver.
 
What's a "node"? Tomorrow's node increments probably won't be equivalent to today's, just as todays aren't equivalent to those in the past. There'll be 4 more "nodes" unless TSMC and Intel decide to pivot to making jellybeans, but who is to say what they'll cost, when they'll arrive, or what they'll deliver.
They will deliver. GAA transistors have better cache scaling and IO, plus Jensen already said he'll use Intel's nodes in the future.
 
If AMD can get the Fluid Motion thing in motion quickly, FSR3 could make their RT performance look quite decent.


While it would not match up to the corresponding 40xx series card, it'd still be much faster than Ampere, and RT would be quite playable at 4K barring the most extreme implementations.

Unless, of course, it runs on all cards, or Nvidia magically finds an optimization that allows DLSS3 to run on Ampere cards too.
 
Well, they're going to do it one way or the other.

Whether it uses the hardware from earlier gens, or it's another FSR2-like solution along the lines of "every developer could have done this already because these APIs are open", FSR3 would help assuage their RT performance woes.
 
Why does it need to be 4 nodes?
We are at 5nm, so naively you'd expect there to be 4, 3, 2, and 1 nm nodes left.
Not sure what comes after that. But each node shrink will not double the number of transistors per square mm. My assumption is that for an XX60 to have the same power as a 4090 of today, you need to more than double the transistors. So you're looking at 2nm or 1nm to make it happen.
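Quick back-of-envelope on that, purely my own assumption: if a full node shrink buys roughly 1.6x logic density (the ballpark quoted for recent TSMC jumps, and SRAM/analog scale far worse), then one shrink isn't enough to double transistor count, but two compounded shrinks are:

```cpp
// Back-of-envelope only; the 1.6x per-node logic-density gain is an assumed
// ballpark, not a measured figure, and SRAM/IO scale considerably worse.
#include <cstdio>

int main() {
    const double gain_per_node = 1.6;   // assumed logic-density gain per full shrink
    double cumulative = 1.0;
    for (int shrinks = 1; shrinks <= 4; ++shrinks) {
        cumulative *= gain_per_node;
        std::printf("%d full node shrink(s): ~%.2fx transistors per mm^2\n",
                    shrinks, cumulative);
    }
    return 0;
}
// Prints roughly 1.60x, 2.56x, 4.10x, 6.55x -> "more than double" needs at
// least two full nodes, which lines up with the 2nm-ish guess above.
```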
 
We are at 5nm, so naively you'd expect there to be 4, 3, 2, and 1 nm nodes left.
Not sure what comes after that. But each node shrink will not double the number of transistors per square mm. My assumption is that for an XX60 to have the same power as a 4090 of today, you need to more than double the transistors. So you're looking at 2nm or 1nm to make it happen.
I'm not up to date, but the node name usually has little to do with any real physical measurement and is mostly just marketing (though smaller is better if we're talking about the same manufacturer).
 
I'm not up to date, but the node name usually has little to do with any real physical measurement and is mostly just marketing (though smaller is better if we're talking about the same manufacturer).
Is it? Yeah, I think I recall that everyone's node sizes aren't equal. But I don't think the discrepancy should be that large.
 
Is it? Yeah, I think I recall that everyone's node sizes aren't equal. But I don't think the discrepancy should be that large.
The term "5 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a 5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers.[3] However, in real world commercial practice, "5 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption compared to the previous 7 nm process.[4][5]
 
We are at 5nm, so naively you'd expect there to be 4, 3, 2, and 1 nm nodes left.
Not sure what comes after that. But each node shrink will not double the number of transistors per square mm. My assumption is that for an XX60 to have the same power as a 4090 of today, you need to more than double the transistors. So you're looking at 2nm or 1nm to make it happen.
Ångströms. But N5x and N4x, or "5nm" and "4nm", are the same node for TSMC, just like N7x/N6 are the same node. N3x will be a new node, though.
 
We are at 5nm, so naively you'd expect there to be 4, 3, 2, and 1 nm nodes left.
Not sure what comes after that.
Skip one node and we land at -1.
This will give us perfect mirror reflections for free,
and much more important: it'll be the first time ever that tech products are marketed with negative numbers.
 
Skip one node and we land at -1.
This will give us perfect mirror reflections for free,
and much more important: it'll be the first time ever that tech products are marketed with negative numbers.
Intel already went to Ångström naming on their future processes starting with 20A
 
I find that to be doubtful when no recent displays have implemented the technology ...
Incorrect, many have, and many still do. The truly best HDR displays are still branded G-Sync Ultimate. If you go on rtings.com, the displays with the least VRR problems are the ones branded G-Sync and G-Sync Ultimate.

You're conflating the concept of quality with standards ...
And you are obfuscating the situation on purpose. Through different tiers, G-Sync effectively became both an open standard and a closed quality standard.

-G-Sync unsupported will work, but with problems, on any trash VRR display that has a bad VRR implementation; those are the majority of displays, by the way.
-G-Sync Compatible picks the best of those common VRR displays and flags them for the user as supporting a relatively stable VRR experience (with minor flaws).
-G-Sync and G-Sync Ultimate are the best VRR displays for SDR/HDR.

I wonder if Epic Games will continue developing/maintaining HW RT especially if their biggest customers (AAA game developers) don't ship the feature on consoles ...
They are still developing it, so obviously there is still demand for it. Other engines are expanding HW-RT support, and Epic wouldn't want to fall behind on features.
There actually was gameplay in Valley of the Ancient, where you can engage in a small combat sequence against the Ancient itself and shoot objects as well.
Very limited in comparison to the Matrix demo, which is the closest to an actual open world game. I don't know how you can still argue this point.
 
Incorrect, many have, and many still do. The truly best HDR displays are still branded G-Sync Ultimate. If you go on rtings.com, the displays with the least VRR problems are the ones branded G-Sync and G-Sync Ultimate.
Without any specific citation: the best HDR displays, according to your source, don't even implement any G-Sync technology, as evidenced by the fact that they're fully functionally compatible with other vendors' hardware despite being branded G-Sync "Ultimate", which I imagine was supposed to be reserved for actual implementations containing the proprietary module that only works with Nvidia graphics ...

The best HDR displays are the best because of their quality, but that's absolutely no thanks to Nvidia rubber-stamping that fact, because the manufacturers didn't use their technology ...
And you are obfuscating the situation on purpose. Through different tiers, G-Sync effectively became both an open standard and a closed quality standard.

-G-Sync unsupported will work, but with problems, on any trash VRR display that has a bad VRR implementation; those are the majority of displays, by the way.
-G-Sync Compatible picks the best of those common VRR displays and flags them for the user as supporting a relatively stable VRR experience (with minor flaws).
-G-Sync and G-Sync Ultimate are the best VRR displays for SDR/HDR.
G-Sync is an open brand with closed quality control certification and a dead end technology ...

Brand and quality control are irrelevant to the technology at hand. Both their brand and their certification could be dead right now and it would make no appreciable difference to the market or the user experience ...
They are still developing it, so obviously there is still demand for it. Other engines are expanding HW-RT support, and Epic wouldn't want to fall behind on features.
Hardware tessellation is deprecated in Unreal Engine, so maybe there's hope after all that Epic Games will kill off HW RT too. The next competitor to UE is Unity, which isn't up to par in terms of networking capabilities/features and doesn't have a GPU-driven renderer like Nanite; no one uses HDRP (a requirement for HW RT), and they have yet to ship ECS as well ...

What are developers going to do? Spend 3+ years developing an in-house engine before they're able to officially start the project?
Very limited in comparison to the Matrix demo, which is the closest to an actual open world game. I don't know how you can still argue this point.
The most you can do in the Matrix City sample is drive around inside a vehicle, with a toggle to control the time of day or NPC density. The NPCs have no detailed physics collisions either, besides a spontaneous disappearing animation upon impact ...

Valley of the Ancient is a much more technically demanding demo as well, since consoles ran it at a lower internal resolution than the Matrix City sample. When you compare their Nanite visualizations, you can clearly see that the backgrounds in the Valley of the Ancient demo are more geometrically dense ...

I imagine it'll be near impossible to use HW RT with tons of Nanite-rendered foliage and get acceptable framerates even on the most powerful hardware available, especially in the case of animated foliage ...
 
People may not want to pay for the G-Sync module, but the idea that G-Sync is dead-end tech is just not true. I'm perfectly happy with my G-Sync Compatible display, but there are very good displays with the G-Sync module. There is always space for premium products in the market.

 