AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

It's worse than that. Nvidia code only runs well on their latest GPUs. Even their own previous GPUs run poorly. Just look at Pascal in the majority of titles released since Turing.

If you are familiar with GPU architectures, you know that this has nothing to do with Nvidia deliberately reducing the performance of Pascal GPUs. Turing is a much more modern architecture, with support for simultaneous FP/INT operations, HW-accelerated Raytracing, DX12 Ultimate, INT8/INT4 acceleration and DirectStorage, and games slowly tap into the potential of these modern GPUs as we progress into the generation. So expect the gap between Turing and Pascal to widen far more in the future.

Pascal is just an outdated architecture from 2016. Likewise, the same applies to RDNA1, which lacks that feature set as well. I expect that even a regular RTX 2060 will outperform the 5700XT and 1080Ti by a large margin, both in performance and visual quality, in next-generation games in around 2-3 years, when the full set of DX12 Ultimate is used and Raytracing becomes the standard. That is, if games even boot up on the 5700XT anymore; I'm a little skeptical considering AMD still has not enabled DXR for it.
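
Just to make concrete what I mean by the full DX12 Ultimate feature set: below is a rough, untested C++ sketch of how a game could query those feature tiers on a D3D12 device. The function name is made up for illustration; the struct and enum names are from d3d12.h as I remember them, so treat it as a sketch rather than copy-paste code.

// Rough sketch (untested): query the DX12 Ultimate feature tiers on an existing device.
#include <d3d12.h>
#include <cstdio>

bool SupportsDX12Ultimate(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {}; // raytracing tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {}; // variable rate shading tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {}; // mesh shader + sampler feedback tiers

    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7));

    const bool dxr             = opts5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1;
    const bool vrs             = opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2;
    const bool meshShaders     = opts7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1;
    const bool samplerFeedback = opts7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;

    std::printf("DXR 1.1: %d, VRS tier 2: %d, mesh shaders: %d, sampler feedback: %d\n",
                dxr, vrs, meshShaders, samplerFeedback);

    // Turing, Ampere and RDNA2 report all four; Pascal and RDNA1 do not.
    return dxr && vrs && meshShaders && samplerFeedback;
}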

I also have high hopes for RDNA2; I think it will age gracefully. Sure, RT performance is lower, but I'm pretty certain future games will implement more modest Raytracing at lower quality settings to help with performance, as we've seen in the console version of Watch Dogs Legion. The consoles will help a ton here.
 
Well, Valhalla clearly doesn't run as well as it could on Nvidia hardware; the 5700XT reaching 2080 Ti performance is very suspicious, and Dirt 5 as well as Godfall underperform too. I have never seen a 5700XT perform below a regular 2060 (without DX12U features, DLSS and RT, of course) in Nvidia titles, which would be the equivalent of this behaviour for AMD on the Nvidia-optimized side...

https://www.guru3d.com/articles_pag..._graphics_performance_benchmark_review,6.html

https://www.guru3d.com/articles_pag..._graphics_performance_benchmark_review,7.html

From 1440p the 2080 Ti is above the 5700 XT, and it is 10 frames above at 4K. All the games are optimized for last-generation and current-gen consoles, which maybe helps GCN, RDNA and RDNA 2 GPUs.

The 5700 XT is behind the 2080 Super at 1440p and behind the 2070 Super at 4K.

RT becoming standard is not certain. For example, Unreal Engine 5 is taking another road for the moment. It will depend on the game engines' choices.
 
That is strange; here are the benchmarks I was referring to: https://www.pcgameshardware.de/Assa...alhalla-Benchmarks-Tuning-Tipps-1361270/2/#a1 They tested in Fornberg, which is extremely demanding and much more so than the integrated benchmark that Guru3D probably used. That could explain it.

I am sure Epic is working to utilize hw-accelerated triangle Raytracing for Lumen. They even said they are working on optimizing Lumen to run at 1440p60 instead of 1440p30 on PS5, which could be achieved by using HW acceleration. It would be a huge waste of resources if the newest lighting system in the industry-leading engine did not take advantage of the fixed-function hardware in modern GPUs.
 
DX12 Ultimate has nothing to do with Pascal falling behind. Outside of a few games using RT, there is no usage of it. Pascal has a more modern feature set than any AMD GPU prior to the recently released 6000 series, and I don't see any issues for those GPUs. No AMD GPU has simultaneous FP/INT operations.
 
That's what I said: "expect that gap to widen more in the future, once more hardware features get used".

Even without any DX12U features or RT, Turing is still ahead in a game like Control because it uses concurrent INT/FP operations heavily, not because Nvidia wants to butcher Pascal's performance.

By the way, modern AMD GCN GPUs are far better at Async Compute than Pascal, and it shows in games like Red Dead 2.
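
For anyone wondering what Async Compute actually means at the API level: it's basically submitting compute work on its own queue so it can overlap with graphics. A minimal, untested D3D12 sketch follows (helper name made up, device assumed valid); how much actual overlap you get is exactly where GCN/RDNA and Pascal differ.

// Minimal sketch (untested): create a dedicated compute queue for async compute.
#include <d3d12.h>

ID3D12CommandQueue* CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;          // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&desc, __uuidof(ID3D12CommandQueue),
                               reinterpret_cast<void**>(&computeQueue));
    // Work submitted here via ExecuteCommandLists() can run alongside the
    // graphics (DIRECT) queue; the GPU's scheduling decides how much overlap you get.
    return computeQueue;
}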
 
Not the triangle scene, but an SDF and voxel representation of part of the scene. This is what they use because it is faster.
Wasn't there a post by sebbi not so long ago where he was looking at using RT h/w for voxel tracing?
There aren't a lot of ways to make dynamic lighting work and almost all of them involve some form of ray tracing.
The fact that UE5 doesn't use RT h/w right now doesn't mean that it's "taking another road". It means that as a massively multiplatform engine it can't be built on a premise of RT h/w being available everywhere and must have some form of a fallback at its base.
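
For reference, the non-triangle "form of ray tracing" people mean here is usually something like sphere tracing a signed distance field. Below is a toy, untested CPU-side C++ sketch of the general technique, with made-up names and obviously not Epic's actual code.

// Toy sphere-tracing sketch (untested): march a ray through a signed distance field (SDF)
// instead of intersecting triangles. General idea only, not Lumen's implementation.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Example SDF: a unit sphere at the origin. A real engine would sample a 3D distance field here.
static float SceneSDF(Vec3 p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Returns true and the hit distance if the ray reaches the SDF surface.
bool SphereTrace(Vec3 origin, Vec3 dir, float maxDist, float* hitT)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i)
    {
        float d = SceneSDF(add(origin, mul(dir, t)));
        if (d < 0.001f) { *hitT = t; return true; } // close enough to the surface
        t += d; // safe step: the SDF guarantees no surface closer than distance d
    }
    return false;
}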
 
It would be very strange if UE5 did not use RT at some point going forward. Even UE4 does? Also, imagine the RT functionality of the PS5 going unused; the little that's there is going to be used ;)
 
They can use HW-accelerated tracing for other scene representations; it can help, but I doubt they will use triangle scene raytracing. It would be interesting to see if it goes faster than the current Lumen implementation.

Lumen uses some form of raytracing, but not triangle-based raytracing. From the latest sebbbi tweet, it seems it can be hardware accelerated too, and it can help in games like Teardown and Dreams, or his own prototype.
 
Interesting, is it possible to accelerate voxel RT by using DXR intersection shaders? So only the intersection part of the raytracing could be hw-accelerated in Lumen?

If that's the case, we could see the performance difference in raytracing between AMD and Nvidia shrink in games using Lumen, because Nvidia would lose the advantage they have from accelerating BVH traversal instead of doing it in software like AMD does. That part would not be needed anymore, and I'd assume both architectures would be pretty comparable in RT performance.
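
If I read the DXR docs right, the hook for that is procedural (AABB) geometry in the acceleration structure plus a custom intersection shader for whatever lives inside each box. A rough, untested C++ sketch of just the geometry description (helper name made up; the HLSL intersection shader itself is a separate piece):

// Rough sketch (untested): describe procedural (AABB) geometry for a DXR bottom-level
// acceleration structure, so a custom intersection shader can handle voxels/SDF bricks.
// Assumes a GPU buffer of D3D12_RAYTRACING_AABB entries.
#include <d3d12.h>

D3D12_RAYTRACING_GEOMETRY_DESC MakeVoxelBrickGeometry(
    D3D12_GPU_VIRTUAL_ADDRESS aabbBuffer, UINT64 aabbCount)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_PROCEDURAL_PRIMITIVE_AABBS; // not triangles
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;

    geom.AABBs.AABBCount = aabbCount;
    geom.AABBs.AABBs.StartAddress  = aabbBuffer;                        // buffer of min/max boxes
    geom.AABBs.AABBs.StrideInBytes = sizeof(D3D12_RAYTRACING_AABB);

    // Traversal of the acceleration structure over these boxes is handled by the DXR
    // runtime; the app only supplies the per-box hit test in its intersection shader.
    return geom;
}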
 
Considering the most popular cards are still the GTX 1060 and the GTX 1050 Ti, I doubt we'll see any DX12U feature become mandatory to run games in the coming 5 years or so. Multiple things that arrived before RT, VRS and the other DX12U features are still not widely adopted, including 4K.
I still think the 5700XT will age better than the RTX 2060(S) (and the RTX 2070S for that matter), simply because despite having additional features, those cards are too slow to really take advantage of them. The main feature that is an exception to this is VRS.
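
To show why I single out VRS: in its basic Tier 1 form it's essentially one call on the command list per draw, which makes it cheap to adopt. A rough, untested D3D12 sketch (function name made up):

// Rough VRS (Tier 1) sketch (untested): lower the shading rate for a draw where
// full-rate shading isn't needed. Assumes an ID3D12GraphicsCommandList5*.
#include <d3d12.h>

void DrawWithCoarseShading(ID3D12GraphicsCommandList5* cmdList)
{
    // Shade once per 2x2 pixel block for this draw (e.g. low-frequency background).
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... set pipeline state, root signature, then draw ...

    // Restore full-rate shading for subsequent draws.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}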
 
Of course. These slow features like DLSS and mesh shaders and RT h/w will definitely make 5700 age better, especially in titles with ray tracing, as can be witnessed in Crysis Remastered, for example.
Speaking of which, Crytek might have rushed the launch a bit too much.
Crysis Remastered 1.0 > 1.3 update:
RX 5700 XT 1080p +70%, 1440p +66%, 4K +42%
GTX 1080 Ti 1080p +50%, 1440p +62%, 4K +42%
RTX 2080 Ti 1080p +15%, 1440p +22%, 4K +17%

https://www.overclock3d.net/reviews...patch_1_3_0_delivers_huge_performance_gains/4
 
Nice gains on some, but why so little increase on the 2080TI?
 
I didn't count DLSS as one of those features, because AMD is working on its own equivalent, which I assume is not going to take more than a year to implement.
You are correct about mesh shaders being able to give a performance boost, but they are still not likely to be mandatory. And it's questionable whether mesh shaders alone will really give an RTX 2060(S) the boost it needs to match or surpass a 5700(XT) in games.

As for RT, well, despite being hardware accelerated on the 2060(S), the performance drop is still too big. Even with DLSS you barely get framerates that are considered playable.

More features don't necessarily mean a card ages better. They certainly can, but there comes a point where a feature hampers performance too much anyway, which I think will happen with RT and the 2060 cards. The 5700 cards being naturally faster, with performance that stands out in some recent games, makes me believe that even missing some features, their performance will stay acceptable for longer than that of the RTX 2060 cards. And inevitably, the 5700 cards will see some benefit from the consoles as well, although obviously the 6000-series cards even more so.

I could be wrong. But we will see.
 
I didn't find anything about the Crysis Remastered patch, so: is this optimization, or is it lowered details under the exact same preset names, as for example Flight Simulator 2020 did a couple of weeks ago? Genuinely interested, but I cannot be bothered to buy Crysis yet again, let alone something that's been slapped together as carelessly as this.

Maybe page 2 gives some hints. Performance with Raytracing disabled seems to improve the least. Shame they did not do that page's benchmarks for the 5700 or 1080 Ti, but I recognize that'd be a lot of extra work.
 
Those Crysis Remastered gains are from purely CPU-limited scenes, where most GPUs were underutilized to begin with.
Regarding "lowered details under the exact same preset names, as for example Flight Simulator 2020 did a couple of weeks ago": what did they do?
 