AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

What we also know is that Navi 21 now converts every rasterizer step to primitive shaders, and you get an 80% performance boost.

@CarstenS uses DX11 software to test. Maybe the new raster pipeline can only be used in DX12 or Vulkan?



tl;dr: Does this mean RDNA1/2 finally have working Primitive Shaders unlike Vega?

Primitive shaders have been working fine in RDNA1 as well; it is just not as beneficial to do culling in primitive shaders there, because the overall expected rasterization performance (and hence the triangle throughput you need to achieve) on the RDNA1 GPUs is lower. Remember, culling in primitive shaders saves work for a fixed-function unit but costs you some shader cycles, which is a tradeoff that is not worth it if you're not triangle-throughput limited.

Now the driver can decide to do primitive shaders without shader based culling and that should deliver similar performance to the old VS stages, but I guess what to do there is up to the driver.
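
To make that tradeoff a bit more concrete, here is a deliberately crude back-of-the-envelope model (every rate and cost below is invented for illustration, not an actual RDNA figure): shader-based culling only pays off once the fixed-function rasterizer, rather than the shader array, is the bottleneck.

```python
# Toy model of the culling tradeoff described above. Every number here is an
# illustrative assumption, not a measured RDNA figure.

RASTER_TRIS_PER_CLK = 4        # assumed fixed-function setup/raster rate
CULL_COST_CLK_PER_TRI = 0.02   # assumed aggregate shader cost to test one triangle
CULLED_FRACTION = 0.5          # assumed share of triangles the shader test rejects

def frame_clocks(triangles, pixel_shader_clocks, cull_in_shader):
    """Crude estimate: the frame is limited by the slower of the fixed-function
    rasterizer and the shader array (which handles both culling and pixel work)."""
    if cull_in_shader:
        shader = pixel_shader_clocks + triangles * CULL_COST_CLK_PER_TRI
        raster = triangles * (1 - CULLED_FRACTION) / RASTER_TRIS_PER_CLK
    else:
        shader = pixel_shader_clocks
        raster = triangles / RASTER_TRIS_PER_CLK
    return max(shader, raster)

# Triangle-throughput-limited frame: lots of tiny triangles, little pixel work.
# Shader culling roughly halves the bottleneck here.
print(frame_clocks(20e6, 1e6, cull_in_shader=False),
      frame_clocks(20e6, 1e6, cull_in_shader=True))

# Shader/pixel-limited frame: modest geometry, heavy pixel shading.
# Shader culling only adds work to the part that was already the bottleneck.
print(frame_clocks(1e6, 5e6, cull_in_shader=False),
      frame_clocks(1e6, 5e6, cull_in_shader=True))
```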
 
I think the 128 is a number for retiring pixels, i.e. RB+ throughput.
Yes, that's consistent with what I wrote. It means the back ends (pixel output) of the scan converters total 128 pixels per clock.
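
For a rough sense of scale, that peak works out as below (the ~2.25 GHz clock is just an assumed illustrative boost clock, not a measured value):

```python
# Back-of-the-envelope peak pixel throughput from the 128 pixels/clock figure
# discussed above; the clock is an assumed illustrative value.
pixels_per_clock = 128
clock_hz = 2.25e9  # assumed ~2.25 GHz boost clock, for illustration only

print(f"peak fill rate ~{pixels_per_clock * clock_hz / 1e9:.0f} Gpixels/s")  # ~288 Gpixels/s
```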

No it's not, that's the funny thing: in all drivers you have 8 scan converters. The Linux driver has the same values. 4 was a mistake from the leaker. Also, Navi 10 should have 2 rasterizers if we compare.

Maybe the diagram is wrong?
The diagram is correct, albeit high level.

Wait a second. That's not what I said. I said I'm seeing numbers for the 6800 XT that align almost perfectly with 64 rasterized pixels per clock, 128 depth writes per clock, and 256 depth rejects per clock.
What render target format did you use?
 
The test writes into a standard 32-bit color FB, nothing too fancy here. It has worked quite well for almost any architecture. Is there a Navi 21 specialty I should be aware of?
 
Ignore Ampere.

Look at performance relative to the 5700 XT: from as low as 30% faster to about 130%, averaging somewhere around 70% faster (50%, 67%, and 86% faster at 1080p, 1440p, and 4K respectively at Hardware Unboxed).

Just to clarify I didn't mean no scaling differences, just that it's another factor compounding the impression.

I notice some quite impressive multi-monitor idle power draw (with 7 MHz memory, which only gives a quarter of the required bandwidth). https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/31.html
I guess that's a strong hint that they are simply presenting the screen from Infinity Cache. Maybe basically running the GPU with the Infinity Cache as main memory. I wonder how much they can extend this to other "2D" usage scenarios. Obviously they are not doing it for video playback right now, where video memory is running at full speed (see the same page above).

Also it seems that the (at least) 64 MB requirement for 2×4K is too much for this mode: https://www.computerbase.de/2020-11..._leistungsaufnahme_desktop_youtube_und_spiele (again probably falling back to full speed instead of some 2D memory clock)
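
The framebuffer math behind that 64 MB figure is easy to sanity-check; a minimal sketch, assuming plain 32-bit front buffers at 60 Hz (illustrative assumptions only, actual swapchain formats and refresh rates may differ):

```python
# Sanity check of the "at least 64 MB for 2x4K" figure above: front-buffer
# size and scanout bandwidth for two 4K displays, assuming 32-bit pixels
# and 60 Hz (illustrative assumptions only).
w, h, bytes_per_px, hz, displays = 3840, 2160, 4, 60, 2

fb_bytes = w * h * bytes_per_px * displays
scanout_bytes_per_s = fb_bytes * hz

print(f"front buffers: {fb_bytes / 2**20:.1f} MiB")                 # ~63.3 MiB, right at the 64 MB mark
print(f"scanout bandwidth: {scanout_bytes_per_s / 1e9:.1f} GB/s")   # ~4.0 GB/s
```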

When things settle a bit I'd like to see some tests to "break" Infinity Cache, for lack of a better term. I feel it's a novel alternative to raw bandwidth, but I'm wondering what other limitations (as in scaling) there are outside of just resolution.

For instance, with this information it seems like behavior changes with one vs. two monitors, at least in terms of power consumption, presumably due to where the GPU needs to fetch data. Would this also have a performance impact in hypothetical multitasking scenarios, especially with dual monitors? Personally I'm a dual-monitor user and multitask with GPU usage even while gaming nowadays, so it's something that would be relevant to me. But beyond my own case, I think that scenario is rather common these days, such as having video (YouTube, a stream) open while gaming.

The other thing is: are we confident that future memory requirements won't also have an impact? If datasets were to grow, it would mean the hit rate would correspondingly drop. At least I'm assuming (unless there's clear information otherwise?) that the amount of data in the IC is not solely a factor of display resolution.
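
As a deliberately crude illustration of that concern (a toy model only, not how a real cache behaves; the working-set sizes are invented):

```python
# Toy illustration of the concern above: a fixed-size on-die cache covering a
# shrinking fraction of a growing per-frame working set. Not a real cache model;
# the working-set sizes are invented for illustration.
CACHE_MB = 128  # Infinity Cache size on Navi 21

for working_set_mb in (150, 250, 400, 600):
    coverage = min(1.0, CACHE_MB / working_set_mb)
    print(f"working set {working_set_mb:>3} MB -> at most ~{coverage:.0%} of accesses can stay on-die")
```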

Is NVIDIA still bringing more FEs? The FE is the only 3070 ever sold at MSRP, AIB cards are way higher, and FE supply was so limited that only a few select NVIDIA storefronts had any to sell (not betting my head on this, but I think it was mentioned somewhere that only two European NVIDIA stores had any).
AMD has confirmed it will produce reference cards until sometime in Q1/21 and is expected to restock their own store too (which is guaranteed to sell at MSRP).

As far as I know Nvidia is also restocking FEs. But I think the reality of the situation is that for both AMD and Nvidia the reference cards are actually likely much lower margin than supplying AiBs for custom cards (at least for the retail DIY channel), and are more there to fill certain targets. The heatsinks and designs are essentially more expensive despite the cards being cheaper than AiB customs. Take the 6800/XT for example: they both use vapor chambers, and I wouldn't be surprised if they're the only ones to, despite being the cheapest price-wise against AiBs. Nvidia also seems to want a specific kind of design-language differentiation from the rest of the market, which drives up costs (the heavy "gamer" design AiBs go with is actually simpler and cheaper).

I wouldn't describe it as a mistake. I'm sure they would have loved to show something if it were possible. The reality is that MLSS is extremely hard to do. God knows how long Nvidia, with all their expertise in ML and the tensor cores to rely on, were working on DLSS before it launched. And even then it took another year before it was usable.

There's an advantage to being second to market with something that's going to be more subjective like this, in that AMD's solution doesn't actually need to match Nvidia's, it just needs to be a perceived alternative. For instance, some people already consider CAS a viable alternative to DLSS. As we can see in the DLSS thread, opinions vary on just how much better DLSS is.

If anything, I wonder if a wide implementation that trades some IQ for performance gains would actually be preferable.

@Frenetic Pony Most of the games released with DXR do not support DXR 1.1, which brings some significant changes (most notably inline ray tracing). I expect there to be some changes in how RT is used and how it performs going forward, especially as console devs dig into it and it's not just relegated to niche hardware. I'm sure the performance hit will remain large, but adoption should be pretty widespread.

I'm actually curious to see if there are different performance deltas between AMD and Nvidia as more games adopt DXR 1.1, which AMD seemed to be heavily involved in.

I'll just risk saying it, but once things settle there might need to be an examination to confirm whether there are any IQ differences with respect to implementation, as opposed to just performance.
 
Lame strawman. The Swedish prices were not scalped, but rather AMD being douchebags (just like Nvidia). At this point I suspect you must be astroturfing here. AMD is not some angelic upstart. It's a money-grubbing company that likes its margins, just like Apple, Intel and Nvidia.
(...)
https://www.sweclockers.com/test/30790-amd-radeon-rx-6800-och-rx-6800-xt/13

Google Translate the last page. AMD left the stores here no margin, so they could not offer MSRP prices.
Yes, poor stores who can't get enough margin out of new video cards. I guess that's why stores like Alternate were carrying a modest 75% premium over MSRP on these cards, right?

PC hardware components never had good margins for retailers; this is nothing new. These stores don't make the bulk of their money selling CPUs and GPUs, as those products have always had ridiculously thin margins of ~5% for the store. They make the bulk of their money on peripherals, accessories, external storage, technical support, etc.
If they want to charge more then they can, and if they want to complain about others selling at MSRP then they should complain about their own competitors who choose to follow MSRP.

This is only happening because some stores think they can still push AMD around as if they were the distant underdogs of 2016 and they want to justify trying to take a bigger piece of the pie during this high-end graphics card craze. They got really happy with the ridiculous prices they charged for GPUs during the mining craze of 2017-2018 and they want that again because demand is similar at the moment.

Complaining now about the low margins that AMD assumes they'll have (i.e. the same they've had for the last 20 years) is just hypocrisy. They don't need to sell GPUs at all. Their only problem is that if they don't put RTX 30 and RX 6000 cards on their storefronts and front pages, no one will buy their $80 RGB mousepads.



Entire AMD subreddit = 1 post with 8 upvotes plus 3 agreeable comments.
Wow, he really showed 'em!
 
FYI, today on AMD's Euro site there was a second batch of cards; they went out in minutes too, but some other users were able to purchase them at MSRP.
 
Honestly, considering shortages are expected to last for months, I don't really fault shops for selling above MSRP. Clearly the demand is there and people will pay well above MSRP right now. I'm sure with COVID-19 business is hurting for retailers, so this is an opportunity for them to pay the rent etc.
COVID hurt a lot of retailers, but not all of them. PC hardware stores, namely the ones with an online storefront, have had record-setting revenues this year.
It's not hard to conclude that people who need to stay at home will tend to spend less money on outdoor activities and more money on stuff at home. Sales of monitors, laptops, desktops, hardware components, gaming mice, etc. soared this year.

They're selling above MSRP because they can, but then they can't complain if their relationship with their suppliers wanes for a while.
E.g. AMD already removed Alternate.de (the store that started selling the RX 6800 XT at 1049€) from their official list of recommended retailers.
 
The test writes into a standard 32-bit color FB, nothing too fancy here. It has worked quite well for almost any architecture. Is there a Navi 21 specialty I should be aware of?
Thank you for all the information.
How does it relate to Navi 10? Do Navi 10 and Navi 21 have the same value?

Here is also an interesting piece of news. I think primitive shaders are now working as intended:

 
Updated system requirements. Looks like the 6800/6800 XT/3070/2080 Ti would be enough for "Ultra" RT at 1440p

TBH I think this is very "optimistic" about HW performance. We will have to wait 3 weeks to find out. One thing is for sure, I won't play the game until I can run it at ultra.

Edit: Image was too big.


Edit2: Looks like it will be very optimized for Nvidia...

 
Updated system requirements. Looks like the 6800/6800 XT/3070/2080 Ti would be enough for "Ultra" RT at 1440p

The non-RT requirements are surprisingly low considering how this game looks. I was expecting it to bring high-end PCs to their knees given the old info we had about a 2080 Ti barely managing 30 fps at 1080p using DLSS.

Come to think of it, I wonder if the lack of an AMD recommendation with RT could be down to the RT being so demanding that it needs to be run with DLSS for a decent frame rate. Probably not, but it's a possibility.

EDIT: you've really got to appreciate the level of scaling built into this game. It can run on a GTX 780 at the low end but requires an RTX 3080 at the highest end!
 
The non-RT requirements are surprisingly low considering how this game looks. I was expecting it to bring high-end PCs to their knees given the old info we had about a 2080 Ti barely managing 30 fps at 1080p using DLSS.

That's why I said I find them "optimistic". I think the low end is for basic looks at barely 30 FPS, or they did some kind of black magic here...

Come to think of it, I wonder if the lack of an AMD recommendation with RT could be down to the RT being so demanding that it needs to be run with DLSS for a decent frame rate. Probably not, but it's a possibility.

I think this is more a consequence of it being RTX-sponsored than anything else. The 6800/XT are on par with the 3070 in RT and far superior to the... 2060?? So why are they being ignored? Nvidia's marketing at its best.
 
I think this is more a consequence of it being RTX-sponsored than anything else. The 6800/XT are on par with the 3070 in RT and far superior to the... 2060?? So why are they being ignored? Nvidia's marketing at its best.

That's what made me think those recommendations are possibly using DLSS, but even that wouldn't make sense, as the 6800 should be as fast as or faster in RT than a 2060 using DLSS.

Has it been confirmed that the game will support RT on AMD GPUs from day one?
 