AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

CDPR didn't say anything other than that CP2077's RT won't work on AMD cards yet, and that AMD support will come soon. RT works in other DXR titles, so that means CDPR is locking RT to Nvidia for now, not that RT support isn't available in AMD's drivers.

It's not working (or working oddly) in the latest Watch Dogs, right? And performance is tanking in other titles. It smells like bad drivers to me. In any case, time will tell, and help.
 
So it turns out CD Projekt Red has re-confirmed that the game will only support NVIDIA RT hardware at launch.
Witcher 3 Hairworks all over again?

I guess, but if it's a DXR implementation and AMD has a DXR driver it should "just work."
Watch Dogs Legion has DXR bugs on AMD hardware. Reason unknown.
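
As a hedged illustration of what "just work" should mean, assuming an app gates RT on the capability the driver reports rather than on the vendor ID (the helper name here is made up; the D3D12 calls are standard):

```cpp
#include <d3d12.h>

// Hypothetical helper: ask the driver which DXR tier it exposes,
// instead of whitelisting GPU vendors. Any driver reporting
// TIER_1_0 or better claims full DXR 1.0 conformance.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // RDNA 2 and Turing/Ampere drivers both report at least TIER_1_0;
    // TIER_1_1 additionally enables inline ray tracing (RayQuery).
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

A title that instead keys the RT toggle off the adapter's vendor ID will refuse to enable it on AMD even when the driver reports the tier, which is what the Cyberpunk situation looks like from the outside.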

Either way this also bodes very, very poorly for future cross-platform support.
Consoles: the great leveller.

It's entirely possible Nvidia has been allowing out-of-spec behavior without throwing any warnings. This would hardly be the first time for either vendor to do this.
Seems unlikely in this particular case, because with ray tracing it's really obvious when behaviour doesn't match the spec.

And considering the utter disparity between Watch Dogs Legion's RT performance and Miles Morales' on the same console, with remarkably similar settings, min-spec targets, and use of RT, it appears the spec itself, while intended to run as a shared API, gives very little idea of good performance behavior between the two architectures.
And "conventional" versus "inline" ray tracing performance/behaviour.

For all we know, AMD is always converting "conventional" ray tracing into "inline". We've seen that NGG is now a solid feature of RDNA 2, where the driver converts D3D11 geometry pipeline functionality into primitive shaders.

Further, it appears incredibly likely that AMD's non-fixed traversal structure gives opportunities the spec does not,
Yes, that seems to be the point (why "inline" exists)...

and it's also highly likely at least some of this behavior can't be replicated at all on Nvidia.
Again, this seems unlikely to me: the way ray payloads are defined and the kinds of ray queries that can be performed are all tightly specified.

Of course inline ray tracing may cause register allocation and/or shared memory allocation problems that hit NVidia more than AMD. So performance may fall off a cliff more easily?
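
For reference, a rough host-side sketch of the two models being discussed, using only standard D3D12/DXR interfaces (root signatures, descriptors, and the shaders themselves omitted). "Conventional" ray tracing goes through a raytracing state object plus shader tables and DispatchRays; "inline" (DXR 1.1) is just a RayQuery inside an ordinary compute or pixel shader, dispatched like any other work, which is also why its register and group-shared-memory pressure lands on the surrounding shader:

```cpp
#include <d3d12.h>

// "Conventional" dynamic-shader DXR: a raytracing state object plus shader
// tables (raygen/miss/hit groups), launched with DispatchRays. How hit/miss
// shaders get scheduled is entirely up to the driver/hardware.
void TraceConventional(ID3D12GraphicsCommandList4* cl,
                       ID3D12StateObject* rtPipeline,
                       const D3D12_DISPATCH_RAYS_DESC& rays) // shader-table addresses filled in elsewhere
{
    cl->SetPipelineState1(rtPipeline);
    cl->DispatchRays(&rays);
}

// "Inline" DXR 1.1: a plain compute PSO whose HLSL uses RayQuery<> to walk
// the acceleration structure itself; no shader tables, no separate RT pipeline.
void TraceInline(ID3D12GraphicsCommandList* cl,
                 ID3D12PipelineState* computePso,
                 UINT groupsX, UINT groupsY)
{
    cl->SetPipelineState(computePso);
    cl->Dispatch(groupsX, groupsY, 1); // traversal's register/LDS cost stays inside this one shader
}
```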

I was watching this yesterday: [embedded video]

and noticed that XSX allows an "out of spec" group size of 256, beyond the 128 that the spec defines (which annoys the fuck out of me; 256 has historically been a sweet spot on AMD hardware, when it's practical at all). Meanwhile for mesh shading, Turing prefers 32 whereas 256 is a win on AMD.

So here's an example of the programming model (shared memory) behind a feature (mesh shading) that appears to require that developers take a radically different approach for NVidia.
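
To make that concrete, a hedged sketch of just the group-size half of the problem. The vendor check, the define name, and the exact sizes are illustrative assumptions (the 32 vs 128/256 preference comes from the post above); the DXGI and dxc pieces are standard:

```cpp
#include <dxgi1_6.h>
#include <string>
#include <vector>

// Hypothetical per-vendor tuning knob: the HLSL would declare
// [numthreads(GROUP_SIZE, 1, 1)] and get GROUP_SIZE from a compile define.
// 0x10DE = NVIDIA, 0x1002 = AMD (PCI vendor IDs).
std::vector<std::wstring> MeshShaderCompileArgs(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // Assumed sweet spots, per the discussion above: small groups on Turing,
    // large groups on RDNA 2 (and 256 on XSX, beyond the 128 the PC spec allows).
    const wchar_t* groupSize = (desc.VendorId == 0x10DE) ? L"32" : L"128";

    return {
        L"-T", L"ms_6_5",                                // mesh shader target for dxc
        L"-D", std::wstring(L"GROUP_SIZE=") + groupSize  // fed to [numthreads(GROUP_SIZE,1,1)]
    };
}
```

And that's just one knob; per-vendor meshlet sizes and shared-memory layouts would need the same treatment.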

To me these latter two just add up to what Cyberpunk indicates as well: that developers may need to rebuild large parts of their RT pipelines to get good performance on both. That's a big task for a very expensive feature with rather limited uses. One of the biggest limits to visual quality is already dev time; the question I'd ask of hardware RT is just, "is it worth it?"
So, the game has gone "gold" and they're fixing it. Then they're going to fix it more for AMD PC. Then they're going to fix it some more for the latest consoles.

I'm hopeful that Cyberpunk is the last major game where ray tracing is "written first" on NVidia. I expect I'll be wrong and there's another year of this shit to come.

With all those RT effects I assume 30 fps? lol.
With DLSS on.
 
I'm hopeful that Cyberpunk is the last major game where ray tracing is "written first" on NVidia. I expect I'll be wrong and there's another year of this shit to come.
Tunnel vision? What about GodFall? Also, are you talking about RT accelerators or DXR game code? I thought AMD and Nvidia were partners and worked closely with Microsoft in implementing the DXR spec.

Once AMD hardware gets the required patches to run Cyberpunk, it will be interesting to see what rendering comparisons are drawn between Nvidia and AMD with all the RT effects enabled.
 
I thought the general consensus was that AMD disabled one entire SE.

That appears to be the case, since the RX 6800 has 96 ROPs: three quarters of Navi 21's full 128, consistent with one of its four shader engines being fused off (and 60 of the 80 CUs active).
If they had wanted to keep the full 128 ROPs, it would have had to be a 64 CU bin, but that likely would have been far too close in performance to the RX 6800 XT.
 
Tunnel vision? What about GodFall? Also, are you talking about RT accelerators or DXR game code? I thought AMD and Nvidia were partners and worked closely with Microsoft in implementing the DXR spec.

Once AMD hardware gets the required patches to run Cyberpunk, it will be interesting to see what rendering comparisons are drawn between Nvidia and AMD with all the RT effects enabled.
Of course they worked closely with MS implementing the DXR spec, but that doesn't mean you can't tailor the game for the implementation of your sponsor within that spec.
Just because all cards are built to support specific APIs doesn't mean you can't screw over one manufacturer relative to the other(s) if you so choose, since each architecture has its strengths and weaknesses and preferred ways of doing things.
And yes, Godfall should just work on NVIDIA too.
 
I'm hopeful that Cyberpunk is the last major game where ray tracing is "written first" on NVidia. I expect I'll be wrong and there's another year of this shit to come.

AMD hasn’t put out any content showing their RT performance in a good light. Seems weird to assume that developers are screwing AMD by optimizing for Nvidia.

Lots of assumptions floating around that AMD’s way is “better” yet we haven’t seen anything to support those assumptions on either PC or console RDNA hardware. If in fact Nvidia’s hardware is capable of higher peak performance it seems like the obvious target for developer focus and optimization on PC.
 
[Attached: six images from the review linked below]

https://www.ixbt.com/3dv/amd-radeon-rx-6800-xt-review.html
 
AMD hasn’t put out any content showing their RT performance in a good light. Seems weird to assume that developers are screwing AMD by optimizing for Nvidia.

Lots of assumptions floating around that AMD’s way is “better” yet we haven’t seen anything to support those assumptions on either PC or console RDNA hardware. If in fact Nvidia’s hardware is capable of higher peak performance it seems like the obvious target for developer focus and optimization on PC.

Plus, when you are two years late to the RT gaming market (the RTX 2080 released in September 2018), you can't really blame devs for working with what they've known and had for some time now...
 
Plus, when you are two years late to the RT gaming market (the RTX 2080 released in September 2018), you can't really blame devs for working with what they've known and had for some time now...

Yeah absolutely that’s a key practical matter. But even putting that aside I don’t know where this angst against Nvidia’s setup is coming from.

Both IHVs support inline and dynamic shader-based DXR. Some folks seem to think that inline is inherently better (why?) and that optimizing for AMD would be better for all of us (why?).

If both were true I would have expected AMD to put out stuff that puts Nvidia’s Star Wars demo to shame. They haven’t even come close.
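
On the "both IHVs support inline and dynamic shader-based DXR" point above: both paths are plain D3D12 features on any driver that reports the right tier. A minimal sketch of the creation side, assuming standard interfaces (subobjects, root signatures, and bytecode omitted), just to show that neither is vendor-specific:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Dynamic-shader DXR: build a raytracing state object out of DXIL libraries,
// hit groups, shader/pipeline configs, etc. (subobject list omitted here).
ComPtr<ID3D12StateObject> CreateDynamicRtPipeline(ID3D12Device5* device,
                                                  const D3D12_STATE_OBJECT_DESC& desc)
{
    // desc.Type must be D3D12_STATE_OBJECT_TYPE_RAYTRACING_PIPELINE
    ComPtr<ID3D12StateObject> pso;
    device->CreateStateObject(&desc, IID_PPV_ARGS(&pso));
    return pso;
}

// Inline DXR 1.1: no state object at all; the RayQuery lives inside a normal
// compute shader, so it's created like any other compute pipeline.
ComPtr<ID3D12PipelineState> CreateInlineRtPipeline(ID3D12Device* device,
                                                   const D3D12_COMPUTE_PIPELINE_STATE_DESC& desc)
{
    ComPtr<ID3D12PipelineState> pso;
    device->CreateComputePipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
```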
 
Tunnel vision? What about GodFall? Also, are you talking about RT accelerators or DXR game code? I thought AMD and Nvidia were partners and worked closely with Microsoft in implementing the DXR spec.
I'm talking about how 3 of the 4 platforms (or 3 of 5 when Intel arrives) will be targeted first :)

Once AMD hardware gets the required patches to run Cyberpunk, it will be interesting to see what rendering comparisons are drawn between Nvidia and AMD with all the RT effects enabled.
It will only tell us what happens when ray tracing is written for a minority platform :)

AMD hasn’t put out any content showing their RT performance in a good light. Seems weird to assume that developers are screwing AMD by optimizing for Nvidia.
Yes, there have been games where AMD was screwed by NVidia-biased optimisations. From now on, though, NVidia is the minority platform.

Lots of assumptions floating around that AMD’s way is “better” yet we haven’t seen anything to support those assumptions on either PC or console RDNA hardware. If in fact Nvidia’s hardware is capable of higher peak performance it seems like the obvious target for developer focus and optimization on PC.
I'm not making assumptions about "better".

Peak performance is no use to anyone. It's worst case frametimes that should be the focus.
 
Yes, there have been games where AMD was screwed by NVidia-biased optimisations. From now on, though, NVidia is the minority platform.

Nvidia has been the minority platform for many years now. That isn’t new.

I'm not making assumptions about "better".

Peak performance is no use to anyone. It's worst case frametimes that should be the focus.

Ok, got it. As a PC gamer I know that developers will target lowest common denominator console hardware. It’s practical but annoying.

Console optimization is a given though so we know that will happen anyway. I will applaud any incremental efforts to support PC hardware to drive progress beyond the baseline. This will become more apparent as PC hardware pulls even further away in the next few years as it does every console generation.
 
AMD hasn’t put out any content showing their RT performance in a good light. Seems weird to assume that developers are screwing AMD by optimizing for Nvidia.
I think the big question is how AMD's solution will handle multiple RT effects simultaneously (like in Cyberpunk), or whether shadows and some reflections will dominate.
 
Ok you guys almost made me believe this.
They're still doing GPU reviews!
https://www.anandtech.com/comments/...rgb-optical-mechanical-keyboard-review/730442

They do, but Anandtech and Heise are also going more and more the way normal reviewers are going. We have absolutely no synthetic benchmarks covering the front end for the last generation and this generation of GPUs, only some hints but no exact values. Hints alone are not enough for a grounded discussion...

We do not know how it behaves at a detailed level, where the boundaries are in the caches, etc. You will not know this without synthetic benchmarks. This is really frustrating.

@Rys When do you want to release your Beyond3D suite for everyone? We really have a lack of synthetic benchmarks.

Every discussion here is worthless when we don't have the basics to understand where the limits of each architecture are.

It's the same with ray tracing. We can argue all the time about why Nvidia is better than AMD in this title and AMD is better in that other one. Here synthetics would also help. Maybe one is better at intersection, the other at denoising...

Without fundamental synthetic benchmarks it's more guessing than knowing...

:confused::cry:
 
Minecraft got a DXR1.1 update and the 3DMark raytracing feature test is DXR1.1 only. In both cases Ampere is twice as fast.

The 3dmark technical guide says “This test uses features from DirectX Raytracing Tier 1.1 to create a realistic ray-traced depth of field effect.”

DXR 1.1 supports both inline and dynamic shader-based DXR, but the guide doesn't explicitly state which approach the test uses. Same for Minecraft.
 