AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

There's not much worth discussing in the game's performance results without RT. (I wonder if bringing up RT-less results that no one was questioning is just another way to skew the narrative.)
I really was trying to understand. My non-RT comment was me trying to give the concerns the benefit of the doubt. (Maybe I should've gone back a couple of pages to catch up.)
Now if you turn RT all the way up, the game becomes an unusable mess in terms of performance with questionable IQ enhancements, unless you have a recently released top-end card from only one IHV (working together with an upsampling technique that is exclusive to that same IHV).
I'm actually amazed that there would be any question at this moment in time regarding how RDNA2 is going to compare against Ampere when RT is enabled. Hence why I thought people were being crazy to even compare it in that manner.

I'm sorry, but if you take issue with them supporting what were the only RT-enabled cards available during development, which then also turned out to be the more performant cards with DLSS2 on top, I don't know what to say to you.
Maybe they should've hobbled their game, or only enabled RT once RDNA3 is available?

As for questionable benefit at the highest setting, isn't that pretty much every PC game? This obviously goes further than just CDPR.
 
I'm actually glad that CDPR pushed the boat out, especially if this game is intended to last a long time.

I'm super happy about cdpr pushing the envelope as well. This is what PC gaming is supposed to be: push to the max, even to the point where future hw is needed for max settings. It's OK to play with non-maxed-out settings.

Yes, we needed this. They push CPUs hard too.
 
It's tough to run this game with ray tracing on at higher resolutions without DLSS. The 3070 FE dips below 30fps on ultra @1440p even in the beginning, which is not as heavy as the city. But you don't necessarily need to run ultra or psycho, or 1440p/4k. It's great to have options. The 6800 XT is likely going to end up somewhere around or slower than the 3070 FE when using ray tracing this heavily. It's going to push memory bandwidth hard, and the 6800 XT sharing TMUs and compute between shading and ray tracing is not going to help. Dedicated hw is good if there is memory bandwidth to run shading, rasterizing and ray tracing in parallel. This is similar to async compute versus no async compute.
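For anyone who hasn't played with async compute: below is a minimal CUDA sketch of the overlap idea, with two made-up kernels standing in for a shading pass and an RT pass (all names here are illustrative, nothing is from Cyberpunk or either vendor's driver). Issued on separate streams, the GPU is allowed to run them concurrently, but it only actually wins if there are spare execution units and memory bandwidth, which is exactly the contention being described above.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for a "shading" pass: ALU-heavy loop over a buffer.
__global__ void shadePass(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = data[i];
    for (int k = 0; k < 256; ++k) v = v * 1.0001f + 0.5f;
    data[i] = v;
}

// Stand-in for an "RT" pass: scattered reads emulate divergent
// BVH/memory traffic, i.e. a bandwidth-hungry workload.
__global__ void tracePass(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int j = (i * 2654435761u) % n;   // pseudo-random gather index
    out[i] = in[j] * 0.5f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    const int block = 256, grid = (n + block - 1) / block;
    // Different streams: the GPU *may* run these concurrently, but
    // it only wins if it has spare units and bandwidth -- the exact
    // contention the post above is describing.
    shadePass<<<grid, block, 0, s0>>>(a, n);
    tracePass<<<grid, block, 0, s1>>>(b, c, n);
    cudaDeviceSynchronize();

    printf("both passes done\n");
    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

With dedicated RT units the two workloads fight less over the same ALUs, but they still share the memory subsystem, which is why bandwidth ends up being the limiter either way.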
 
I really was trying to understand. My non-RT comment was me trying to give the concerns the benefit of the doubt. (Maybe I should've gone back a couple of pages to catch up.)
There's nothing to discuss about non-RT performance IMO.
nVidia has an edge mostly because they most probably had access to the code earlier to optimize their GPU drivers. It doesn't mean that AMD can't catch up, especially considering the cards within the same price bracket are already within ~10% of each other.
We do know CDPR didn't even have a Navi 21 card until after the cards were released, which is when their QA team finally put the RX 6800 XT in their system requirements.



I'm actually amazed that there would be any question at this moment in time regarding how RDNA2 is going to compare against Ampere when RT is enabled.
Who questioned that Ampere has better RT performance than RDNA2? And where?



I'm sorry, but if you take issue with them supporting what were the only RT-enabled cards available during development, which then also turned out to be the more performant cards with DLSS2 on top.
The only RT-enabled cards available during development were the Turing cards, which run like crap when all RT effects are turned on. I made that exact point in my previous post.



Not at all. Just saying that fear mongering without evidence of nefarious activity by CDPR is uncalled for.
Define "nefarious". I don't think CDPR puting a bunch of unoptimised brute-forced RT effects into the game by nvidia's request is nefarious. Just like using Hairworks on Witcher 3 with subpixel triangles that brought Kepler and GCN2 cards to their knees.
OTOH, nvidia making those requests...
 
The only RT-enabled cards available during development were the Turing cards, which run like crap when all RT effects are turned on. I made that exact point in my previous post.
They probably run fine at lower resolutions or settings.

The point is you add features and settings that go beyond what is currently available.
As it turns out, there are already some cards available that can make use of them. Maybe they should've pushed even harder, in my eyes.
But I'm a bit of a radical in that way; I'm from the "can it run Crysis" school of thought for PC. As long as there are reasonable settings for a wide range of configurations.
 
Define "nefarious". I don't think CDPR puting a bunch of unoptimised brute-forced RT effects into the game by nvidia's request is nefarious. Just like using Hairworks on Witcher 3 with subpixel triangles that brought Kepler and GCN2 cards to their knees.
OTOH, nvidia making those requests...

Nefarious would be an implementation that unnecessarily and intentionally harms RDNA performance.

Are you saying that Cyberpunk has a bunch of unoptimized brute force RT effects requested by Nvidia to harm AMD? Since we all agree conspiracy theories are stupid I'm sure you have information to support that position.

They probably run fine at lower resolutions or settings.

Turing seems to be doing fine for a 2-year-old arch, based on Tom's numbers.
 
There's nothing to discuss about non-RT performance IMO.
nVidia has an edge mostly because they most probably had access to the code earlier to optimize their GPU drivers.

There is a big difference in how RT works on AMD and nvidia hw. On AMD hw, RT uses compute (traversal) and the TMUs (memory accesses) to do ray tracing. On nvidia hw there are dedicated units. This allows nvidia hw to better run raster, compute and ray tracing in parallel. This is similar to async compute, which gave AMD an advantage in the past, but now nvidia has better async compute when using ray tracing. All of this is of course memory bandwidth dependent, where it looks like nvidia has an advantage, especially with divergent rays.
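To make the "traversal in compute" part concrete, here's a toy CUDA sketch of what a software BVH walk looks like (just the shape of the work, not AMD's actual implementation; the node layout and names are made up): every node test is plain ALU code and every node fetch is a memory access, which on RDNA2 competes with shading for the same compute units and TMUs.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Toy BVH node: an AABB plus either two children or a leaf payload.
struct Node {
    float3 lo, hi;    // AABB bounds
    int left, right;  // child indices (-1 if leaf)
    int prim;         // leaf primitive id (-1 if inner node)
};

// Standard slab test; on RDNA2 this runs as ordinary shader ALU work.
__device__ bool hitAABB(float3 o, float3 inv, float3 lo, float3 hi) {
    float t0 = (lo.x - o.x) * inv.x, t1 = (hi.x - o.x) * inv.x;
    float tmin = fminf(t0, t1), tmax = fmaxf(t0, t1);
    t0 = (lo.y - o.y) * inv.y; t1 = (hi.y - o.y) * inv.y;
    tmin = fmaxf(tmin, fminf(t0, t1)); tmax = fminf(tmax, fmaxf(t0, t1));
    t0 = (lo.z - o.z) * inv.z; t1 = (hi.z - o.z) * inv.z;
    tmin = fmaxf(tmin, fminf(t0, t1)); tmax = fminf(tmax, fmaxf(t0, t1));
    return tmax >= fmaxf(tmin, 0.0f);
}

// Iterative BVH walk: every node fetch is a memory access and every
// test is ALU code, all on the same units shading would otherwise use.
__global__ void traverse(const Node* bvh, float3 o, float3 d, int* hit) {
    float3 inv = make_float3(1.0f / d.x, 1.0f / d.y, 1.0f / d.z);
    int stack[32], sp = 0;
    stack[sp++] = 0;                          // push root
    while (sp > 0) {
        Node n = bvh[stack[--sp]];
        if (!hitAABB(o, inv, n.lo, n.hi)) continue;
        if (n.prim >= 0) { *hit = n.prim; continue; }
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
}

int main() {
    // Root box spanning two leaf boxes, left and right of the origin.
    Node h[3] = {
        { {-2,-1,-1}, { 2,1,1},  1,  2, -1 },
        { {-2,-1,-1}, {-1,1,1}, -1, -1,  0 },
        { { 1,-1,-1}, { 2,1,1}, -1, -1,  1 },
    };
    Node* bvh; int* hit; int result = -1;
    cudaMalloc(&bvh, sizeof(h));
    cudaMalloc(&hit, sizeof(int));
    cudaMemcpy(bvh, h, sizeof(h), cudaMemcpyHostToDevice);
    cudaMemcpy(hit, &result, sizeof(int), cudaMemcpyHostToDevice);

    // One ray along +x: should land in the right-hand leaf (prim 1).
    traverse<<<1, 1>>>(bvh, make_float3(0,0,0), make_float3(1,0,0), hit);
    cudaMemcpy(&result, hit, sizeof(int), cudaMemcpyDeviceToHost);
    printf("hit prim %d\n", result);
    cudaFree(bvh); cudaFree(hit);
    return 0;
}

Dedicated traversal/intersection units hoist that whole while-loop out of the shader, which is why the two architectures diverge the most when RT load is heavy.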
 
It's tough to run this game with ray tracing on at higher resolutions without DLSS. The 3070 FE dips below 30fps on ultra @1440p even in the beginning, which is not as heavy as the city. But you don't necessarily need to run ultra or psycho, or 1440p/4k. It's great to have options. The 6800 XT is likely going to end up somewhere around or slower than the 3070 FE when using ray tracing this heavily. It's going to push memory bandwidth hard, and the 6800 XT sharing TMUs and compute between shading and ray tracing is not going to help. Dedicated hw is good if there is memory bandwidth to run shading, rasterizing and ray tracing in parallel. This is similar to async compute versus no async compute.

This is the first taste of next gen pushing on all fronts: CPU, ray tracing, SSD streaming, raw bandwidth, raw GPU raster power, Dolby Atmos audio.
It's no wonder that when you do all of that at once, in a big open world with lots of stuff going on, while chasing framerates and resolutions... yeah, things are going to suffer the further down the range of hw you go.

Good to see it's running/optimized for both nv and amd at the least.

A 'Crysis' was much needed imo: push everything, instead of forgoing RT or skipping big open areas etc. It's what people envisioned, RT in big open worlds with infinite detail, in a next-gen flavour.
Anyone noticed the particle effects too? They really add to the immersion.
 
I'm really excited to see how much cdpr can optimize their implementation in the future. Now that the tech and the game are here, the rest is polishing it up and optimizing. If they were really crunching the whole year, perhaps some work was postponed until after launch in the effort to ship the game. They could have ideas on how to make things a lot better but didn't have time to refactor or change the implementation drastically. It must be some kind of learning experience; the first try is rarely as good as it can get.
 
There is a big difference in how RT works on AMD and nvidia hw. On AMD hw, RT uses compute (traversal) and the TMUs (memory accesses) to do ray tracing. On nvidia hw there are dedicated units. This allows nvidia hw to better run raster, compute and ray tracing in parallel. This is similar to async compute, which gave AMD an advantage in the past, but now nvidia has better async compute when using ray tracing. All of this is of course memory bandwidth dependent, where it looks like nvidia has an advantage, especially with divergent rays.
I'm not questioning whether or not Ampere has better RT performance than RDNA2. The sentence you quoted referred to non-RT performance.
 
Not at all. Just saying that fear mongering without evidence of nefarious activity by CDPR is uncalled for.
Nefarious would be an implementation that unnecessarily and intentionally harms RDNA performance.

Are you saying that Cyberpunk has a bunch of unoptimized brute force RT effects requested by Nvidia to harm AMD? Since we all agree conspiracy theories are stupid I'm sure you have information to support that position.
These comments are really ironic, considering CDPR willingly collaborated with nVidia to use x64 tessellation for HairWorks in The Witcher 3, which by the way looks literally no different from x8/x16 tessellation at a much lower performance cost on both nVidia and AMD cards, right at the time when nVidia's cards happened to have an edge in tessellation over AMD's...
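For reference, a quick back-of-the-envelope calculation of why those subpixel triangles hurt so much (the segment length and the simple quad cost model below are illustration numbers, not HairWorks' actual geometry): GPUs shade pixels in 2x2 quads, so a triangle covering less than a pixel still pays for at least 4 pixel shader invocations. Plain host-side code, same toolchain as the earlier sketches:

Code:
#include <cstdio>

// Rough model: a 16 px on-screen hair segment split by the tessellation
// factor; each triangle occupies at least one 2x2 quad (4 invocations),
// and useful pixels per quad are capped at 4.
int main() {
    const float segment_px = 16.0f;       // made-up illustration number
    const int factors[] = { 8, 16, 64 };  // tessellation factors
    for (int f : factors) {
        float tri_px = segment_px / f;    // approx. pixels per triangle
        float useful = tri_px < 4.0f ? tri_px : 4.0f;
        printf("x%-2d: %5.2f px/triangle, ~%4.1fx shading cost\n",
               f, tri_px, 4.0f / useful);
    }
    return 0;
}

Under this toy model, x8 gives 2 px triangles at ~2x cost, while x64 gives quarter-pixel triangles at ~16x cost, for the same on-screen hair.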

But right... "Fear mongering". "Conspiracy theorists".

I am very well aware that just because it happened in the past it does not necessarily mean that it is in fact happening again. But to use those labels to just dismiss it as some crazy, out-there, impossible scenario is shallow. We should all be on the lookout for these things, unless, you know, you want another 10 years of no competition, where a deliberately questionable implementation would leave no competitor in the market, and even higher GPU prices.

You have no counter argument other than trying to shame people with spooky words.
No wonder the world is turning into a tyranny, and you're gonna deserve it when it does.
 
RT is officially supported in WDL on AMD RT capable GPUs now: https://forums.ubisoft.com/showthread.php/2302093-Title-Update-2-40-Patch-Notes

Looking forward to seeing the PC benchmarks. Going by the closest comparison one could make with the consoles, AMD GPUs there were at around a 15% deficit to Turing GPUs in an otherwise "teraflop to teraflop" comparison (the PS5 hit around the same as a 2060 Super, which is around 15% slower than a 2070 Super in many cases at similar res). Not great but not deal-breaking for many. So it'll be interesting to see the difference on PC.
 
What I can say now from my first 10 minutes as a "corpo" is that Screen Space Reflections Quality on "psycho" instead of ultra really kills performance on a 6900 XT. Walking through the tutorial with a full view of the corpo's (very!!!) shiny (!) office, fps go down from 46 to 32 at 4k with everything else maxed.
 