AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Watch Gamers Nexus' review instead. Hardware Unboxed isn't exactly an unbiased source...

Why do you say that? HUB always came across as fair and balanced to me.

I'm making my way through all the big sites and the story seems pretty consistent so far. Very strong rasterization performance, great power efficiency and noise levels. Slower but still competitive at 4K, and significantly behind on RT.
 
So the question is: will ray tracing performance improve with new drivers, or is AMD that far behind?
I guess it should be both: ray tracing performance will improve with new drivers (or perhaps in new games that were made with RDNA2 RT hardware in mind), but we should also expect nvidia's 2nd generation of RT to be more effective than AMD's 1st generation.

My problem now is how much excess RT (with hardly perceptible IQ differences) nvidia will be trying to force into newer games, just to get a higher performance delta over their competition in benchmarks.
It's exactly what they did with excess geometry / tessellation for years (Crysis 2's hyper-detailed concrete slabs, Witcher 3's subpixel triangles for hair strands, etc.) and I fully expect them to use the same tactics.

I hope AMD's stronger influence on game development might hamper this somewhat, but nvidia does seem to have a very strong grip on developer relations.
 
Why does AMD have only 4 rasterizers?

https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/images/arch1.jpg

But when you look into the driver, it reports 8 scan converters?

 
We ran some tests with the 3DMark Feature Test: with increasing sample count, the gap between the RX 6800 XT and the RTX 3070 lessens. edit: I should add that in traditional rasterization there's another gap, in the opposite direction, and it's b i g.

Here are the numbers. Bear in mind that the Feature Test is an almost purely raytraced scene with denoising, so the image quality is only acceptable in still images.
 

Attachment: 3DMArk.PNG (14.8 KB)
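To illustrate why the gap can shift with the sample count, here's a minimal toy model (every throughput and overhead number below is a made-up placeholder, not a measured figure for any real card): if frame time is roughly a fixed overhead plus a per-ray cost, the relative gap between two GPUs converges toward the ratio of their pure per-ray throughputs as the per-pixel sample count grows.

```python
# Toy model: frame time ~= fixed overhead + rays / ray throughput.
# All numbers are invented placeholders, purely for illustration.

WIDTH, HEIGHT = 2560, 1440  # assumed render resolution

def frame_time_ms(samples_per_pixel, fixed_ms, gigarays_per_s):
    """Fixed per-frame cost plus the time spent purely on tracing rays."""
    rays = WIDTH * HEIGHT * samples_per_pixel
    return fixed_ms + rays / (gigarays_per_s * 1e9) * 1e3

# Hypothetical GPU A: lower fixed cost, lower per-ray throughput.
# Hypothetical GPU B: higher fixed cost, higher per-ray throughput.
for spp in (2, 8, 32):
    a = frame_time_ms(spp, fixed_ms=4.0, gigarays_per_s=8.0)
    b = frame_time_ms(spp, fixed_ms=6.0, gigarays_per_s=12.0)
    print(f"{spp:>2} spp: A {a:6.2f} ms, B {b:6.2f} ms, A/B {a/b:.2f}")
```

The higher the sample count, the less the fixed costs matter and the closer the ratio gets to the pure per-ray throughput ratio, which is why a synthetic, almost-pure-RT feature test can order cards differently than a mixed game workload.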
4K scaling is really poor. The Infinity Cache's "virtual 2 TB/s" of bandwidth is even more wasted than the 5700 XT's 448 GB/s. Wow.

arch7.jpg

I think this explains it. AMD said a 58% cache hit rate at 4K, and from the chart it looks like ~75% at 1080p and ~67-68% at 1440p, so there's a significant drop-off at 4K.

I had earlier speculated that N22 would probably have around 96 MB of Infinity Cache and be targeted at 1440p. From this graph I get the feeling that's likely. N23 could be 32 MB or even 48 MB, as the hit rate with 32 MB seems low (unless it's targeted only at 1080p). Of course this chart could be specific to N21 and its configuration and might be totally different for the other chips.
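To put those hit rates in perspective, here's a minimal sketch of a simple mixing model for "effective" bandwidth; the model itself and the round cache-bandwidth figure are assumptions for illustration, not AMD's methodology: hits are served at Infinity Cache bandwidth, misses fall through to GDDR6.

```python
# Simple mixing model: effective BW = hit_rate * cache BW + (1 - hit_rate) * VRAM BW.
# The model and the round 2 TB/s cache figure are assumptions for illustration,
# not AMD's own methodology.

IC_BW_GBPS = 2000.0    # the "virtual 2 TB/s" figure mentioned above
VRAM_BW_GBPS = 512.0   # Navi 21: 256-bit GDDR6 at 16 Gbps

def effective_bw(hit_rate):
    return hit_rate * IC_BW_GBPS + (1.0 - hit_rate) * VRAM_BW_GBPS

# Hit rates as read off the chart: ~75% (1080p), ~67% (1440p), 58% (4K).
for res, hit_rate in (("1080p", 0.75), ("1440p", 0.67), ("2160p", 0.58)):
    print(f"{res}: ~{effective_bw(hit_rate):.0f} GB/s effective")
```

Under that model the effective figure falls from roughly 1.6 TB/s at 1080p to about 1.4 TB/s at 4K, while the per-frame bandwidth demand grows with resolution, which fits the weaker 4K scaling discussed above.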
 
I guess it should be both: ray tracing performance will improve with new drivers (or perhaps in new games that were made with RDNA2 RT hardware in mind), but we should also expect nvidia's 2nd generation of RT to be more effective than AMD's 1st generation.

My problem now is how much excess RT (with hardly perceptible IQ differences) nvidia will be trying to force into newer games, just to get a higher performance delta over their competition in benchmarks.
It's exactly what they did with excess geometry / tessellation for years (Crysis 2's hyper-detailed concrete slabs, Witcher 3's subpixel triangles for hair strands, etc.) and I fully expect them to use the same tactics.

I hope AMD's stronger influence on game development might hamper this somewhat, but nvidia does seem to have a very strong grip on developer relations.
The story about Crysis 2's hyper-tessellated concrete slabs is an absolute gaming myth. Demonstrably so. I have written about it on my Twitter and on ResetEra and NeoGAF before. I have even had developers of the game confirm it.
 
I guess it should be both: ray tracing performance will improve with new drivers (or perhaps in new games that were made with RDNA2 RT hardware in mind), but we should also expect nvidia's 2nd generation of RT to be more effective than AMD's 1st generation.

My problem now is how much excess RT (with hardly perceptible IQ differences) nvidia will be trying to force into newer games, just to get a higher performance delta over their competition in benchmarks.
It's exactly what they did with excess geometry / tessellation for years (Crysis 2's hyper-detailed concrete slabs, Witcher 3's subpixel triangles for hair strands, etc.) and I fully expect them to use the same tactics.

I hope AMD's stronger influence on game development might hamper this somewhat, but nvidia does seem to have a very strong grip on developer relations.

Guess it will come down to how much they can improve ray tracing and how good a DLSS alternative they can make.
 
Sounds like an architecture issue. What's the chance this will be an issue with Intel GPUs?
 
[attachment: Infinity Cache hit-rate chart]

I think this explains it. AMD said a 58% cache hit rate at 4K, and from the chart it looks like ~75% at 1080p and ~67-68% at 1440p, so there's a significant drop-off at 4K.

I had earlier speculated that N22 would probably have around 96 MB of Infinity Cache and be targeted at 1440p. From this graph I get the feeling that's likely. N23 could be 32 MB or even 48 MB, as the hit rate with 32 MB seems low (unless it's targeted only at 1080p). Of course this chart could be specific to N21 and its configuration and might be totally different for the other chips.

If you look closely, the marks are at 128, 96 and 64 MB, so I think AMD gave us a spoiler there.

Those look like the sweet spots (where the curve flattens) for 1440p and 1080p.
 
It's a performance issue. nVidia's RT cores offload more work from the shaders and have their own caches, and Ampere has twice the triangle-intersection performance. And the 3080 and 3090 have 50% to 70% more off-chip bandwidth. In the end, raytracing is brute force, and nVidia is brute-forcing their way through it...
If you simply use brute force, you won't get very far... (ah, if only it were that easy..)
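For reference on the off-chip bandwidth point above: peak memory bandwidth follows directly from bus width times per-pin data rate, so a quick sketch with the publicly listed memory configurations looks like this (treat the exact data rates as the commonly quoted spec values):

```python
# Peak memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def mem_bw_gbps(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

cards = {
    "RX 6800 XT / 6900 XT (256-bit GDDR6 @ 16 Gbps)": (256, 16.0),
    "RTX 3080 (320-bit GDDR6X @ 19 Gbps)":            (320, 19.0),
    "RTX 3090 (384-bit GDDR6X @ 19.5 Gbps)":          (384, 19.5),
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {mem_bw_gbps(bus_bits, gbps):.0f} GB/s")
```

That works out to 512 GB/s for Navi 21 versus 760 GB/s and 936 GB/s for the 3080 and 3090, which is the raw deficit the Infinity Cache is meant to cover.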
 