AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Is that an HBM PHY (x2) at the "south west" side?

The Vega 20 HBM2 PHYs seem different (taller), but it could be.




Though if it's HBM2E then perhaps two PHYs are overkill? A single HBM2E stack at 3.2Gbps offers 410GB/s of bandwidth, which should be more than enough for e.g. a lower-clocked / lower-power premium GPU.
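Rough check of that number, as a quick sketch (the standard 1024-bit stack interface and 3.2Gbps per pin are the assumptions here):

```python
# Rough HBM2E per-stack bandwidth check (assumes the standard 1024-bit stack interface).
pin_rate_gbps = 3.2           # assumed per-pin data rate, Gb/s
interface_width_bits = 1024   # one HBM2/HBM2E stack

bandwidth_gb_per_s = pin_rate_gbps * interface_width_bits / 8  # Gb/s -> GB/s

print(f"{bandwidth_gb_per_s:.1f} GB/s per stack")  # -> 409.6 GB/s, i.e. ~410 GB/s
```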
 
Strategically it's a risky play, but if RT acceleration is worse it could pan out: just don't enable DXR support at all until well past launch.

Reviewers tend to want to bench apples to apples, which means that if DXR were enabled they'd test with RT settings (with a tendency towards max settings), whereas with no support the comparison would be done sans RT. Launch reviews would therefore show a more favourable performance comparison.

Post-launch benchmarks with RT support, meanwhile, can be massaged to show that it can be done and framed from a user-experience standpoint instead of as a direct performance comparison.

I fully expect RT benchmarking in reviews unless the NDA prohibits it; not having it would be a relative disaster for AMD, much worse than RT performance being slower than equivalently placed NV cards.

And even if it's prohibited before launch due to review contracts for pre-release hardware, post-launch nothing stops a reviewer from purchasing a card and reviewing it. I don't plan to pre-order one either way, so I'd be getting one 2+ months after launch.

Another big benefit when comparing the 6800 XT and 6800 (I'm not interested in the 6900 XT) is that they both come with 16 GB of memory. I think it was a huge mistake for NV not to launch the 3080 and 3070 with 16 GB, considering that a new console generation has just started, which means upcoming AAA games will be utilizing more memory.

The limited 10 GB of memory on the RTX 3080 is why I didn't buy one and never seriously considered getting it.

Regards,
SB
 
2.25 GHz * 512 bytes = 1152 GB/s.

1152 GB/s + 512 GB/s = 1664 GB/s

1664 GB/s / 768 GB/s = 2.16666 ~ 2.17
So AMD is assuming the maximum boost clock, rather than the game clock, as the value to calculate the effective memory bandwidth from.. Hmm..
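Here's the same arithmetic as a quick sketch (the 2.25GHz boost clock and 512 bytes per clock are taken from the lines above; 16Gbps GDDR6 on the 256-bit and 384-bit buses is assumed):

```python
# Reverse-engineering AMD's "2.17x" effective-bandwidth multiplier (assumptions as above).
boost_clock_ghz = 2.25        # RX 6800 XT maximum boost clock
cache_bytes_per_clock = 512   # assumed Infinity Cache width per clock

cache_bw = boost_clock_ghz * cache_bytes_per_clock   # 1152 GB/s
gddr6_256bit_bw = 16 * 256 / 8                       # 512 GB/s
gddr6_384bit_bw = 16 * 384 / 8                       # 768 GB/s ("1.0x" baseline)

effective_bw = cache_bw + gddr6_256bit_bw            # 1664 GB/s
print(effective_bw / gddr6_384bit_bw)                # ~2.1667, i.e. the 2.17x figure
```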
 
All the concern "trolling" about hypothetical RT performance is giving me diarrhea...
anyways... the only official figure is the following:

Am I a troll for desiring a high-end GPU from AMD which can compete with Nvidia in all metrics? Am I a troll for not wanting to replace my 2080 Ti with a card that is faster on some fronts and slower on others?

I sold my 2080 Ti back in August when it became clear that the successor was finally right around the corner. I had hoped for a 20GB 3080 as 10GB VRAM is insufficient for my needs, but the lack of announcements of such a product and the recent rumor that these cards will never be released were enough to cause me to go out and buy another 2080 Ti to hold me over. So I'm back where I started, but plus a few hundred dollars due to profits from the sale of my original card.

Look, I know it's tempting to think that every person who bothers to opine on these matters is some kind of fanboy troll, but that's not always the case. Especially on this forum. This isn't Reddit, it's not a youtube comments section, it's not the wccftech comments section. There are smart, reasonable people here that make up the majority of posters on this forum.

We don't yet know the RT performance of the RX 6000 series, and as a potential customer for this card, that is enough to make me skeptical. Enough so that I will wait until 3rd party reviews, at least.
 
It didn't? 6800XT trading blows with 3080 and 6800 easily beating 2080Ti/3070.
Edit- according to AMD numbers.

The theory was that the 3080 is compute heavy but doesn't scale as well at lower resolutions like 1440p, so a high-clocked RDNA2 with more ROPs and more geometry performance would beat it handily at 1440p and 1080p. The 1440p numbers looked roughly equal, and the 4K numbers looked roughly equal, so it seems like the 6800 XT scales pretty similarly with resolution to the 3080.
 
The theory was that the 3080 is compute heavy but doesn't scale as well at lower resolutions like 1440p, so a high-clocked RDNA2 with more ROPs and more geometry performance would beat it handily at 1440p and 1080p. The 1440p numbers looked roughly equal, and the 4K numbers looked roughly equal, so it seems like the 6800 XT scales pretty similarly with resolution to the 3080.
Bandwidth is the big factor here that allows it to keep up. Curious to see what happens over time.
 
MS kinda confirmed that a DLSS-like feature is in the works.

"Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution."
 
I fully expect RT benchmarking in reviews unless the NDA prohibits it; not having it would be a relative disaster for AMD, much worse than RT performance being slower than equivalently placed NV cards.

And even if it's prohibited before launch due to review contracts for pre-release hardware, post-launch nothing stops a reviewer from purchasing a card and reviewing it. I don't plan to pre-order one either way, so I'd be getting one 2+ months after launch.

Another big benefit when comparing the 6800 XT and 6800 (I'm not interested in the 6900 XT) is that they both come with 16 GB of memory. I think it was a huge mistake for NV not to launch the 3080 and 3070 with 16 GB, considering that a new console generation has just started, which means upcoming AAA games will be utilizing more memory.

The limited 10 GB of memory on the RTX 3080 is why I didn't buy one and never seriously considered getting it.

Regards,
SB

Like I said, it's risky, but it could be an effective strategy for managing the message being sent out.

Initial reviews will show the most favourable numbers since they'd be sans any direct RT comparisons. You also don't necessarily need an NDA to enforce this, as it can be done purely via driver support (or the lack of it, in this case). Launch reviews have the highest viewer volume, anyone searching casually in the future will be sent to launch reviews, and initial impressions just tend to be stronger.

There could be a firm commitment to when RT support will be enabled (since it's an artificial delay anyway), so testing would still be done, just at a later date, and there wouldn't be any question as to whether or not support will come. Another advantage is that they could also wait for a game that favours AMD's RT (at least to some extent). RT isn't generally seen to have reached a critical point yet, but it is growing in terms of future impact, so there is some time elasticity to work with.
 
128 ROPs:

https://www.amd.com/en/products/specifications/compare/graphics/10516,10521,10526

Ooh, and 96 ROPs for RX 6800 (one shader engine disabled?)
Perhaps, going by the earlier code commits, there are a few shader arrays deactivated instead?
That could leave the SEs enabled, albeit with some possible load-balancing changes needed.


All the concern "trolling" about hypothetical RT performance is giving me diarrhea...
anyways... the only official figure is the following:
I'm trying to search for RTX numbers on that data set, to get a ballpark idea of their relative positioning and to know how good/bad that software fallback baseline is.

Is that an HBM PHY (x2) at the "south west" side?
Hard to tell from an artistic rendering, although I think an HBM PHY tends to be larger than that. Maybe it's another interface, like PCIe? The other side doesn't seem to have a solid interface block besides the display-related ones.

I think we can actually calculate how much bandwidth there is in the Infinity Cache, from this slide:


If we assume they're talking about 16Gbps, then the "1.0X 384bit G6" means 768GB/s and the "256bit G6" is 512GB/s.
If the Infinity Cache is 2.17x the 384bit G6, then its output is 1666.56GB/s. Take away the 512GB/s from the 256bit G6 and we get 1154.56GB/s for the Infinity Cache alone.
I'm guessing this is an odd number because this LLC is working at the same clocks as the rest of the GPU.. maybe they're using the 2015MHz game clock.

I thought it would be more straightforward for the data fabric if the memory controllers fed into the cache or shared a memory stop. Having both types feeding into the fabric at the same time would require more stops on the fabric for not much gain since missing the cache automatically gave a cycle to the memory controller.

I did find this footnote from the RDNA2 page:
(edited to correct number)
3. Measurement calculated by AMD engineering, on a Radeon RX 6000 series card with 128 MB AMD Infinity Cache and 256-bit GDDR6. Measuring 4k gaming average AMD Infinity Cache hit rates of 58% across top gaming titles, multiplied by theoretical peak bandwidth from the 16 64B AMD Infinity Fabric channels connecting the Cache to the Graphics Engine at boost frequency of up to 1.94 GHz. RX-535
This could be achieved with separate feeds, but the max bandwidth of the cache has enough of a shadow that a GDDR6 controller sharing the same stop could fill in the missing 512GB/s.
This does seem to point to a different clock domain for the fabric and cache, since the GPU boost and game frequencies aren't capped at 1.94 GHz.
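For what it's worth, plugging the footnote's numbers in does land on roughly the 2.17x figure from the slide (a quick sketch; the 512GB/s and 768GB/s GDDR6 figures assume 16Gbps modules):

```python
# Checking footnote RX-535 against the "2.17x effective bandwidth" slide.
channels = 16            # Infinity Fabric channels between cache and graphics engine
bytes_per_channel = 64   # bytes per clock per channel, per the footnote
fabric_clock_ghz = 1.94  # boost frequency quoted in the footnote
hit_rate = 0.58          # measured 4K average Infinity Cache hit rate

cache_peak_bw = channels * bytes_per_channel * fabric_clock_ghz  # ~1986.6 GB/s
effective_bw = hit_rate * cache_peak_bw + 512                    # add 256-bit 16Gbps GDDR6
print(effective_bw / 768)                                        # ~2.17 vs 384-bit GDDR6
```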
 
Am I a troll for desiring a high-end GPU from AMD which can compete with Nvidia in all metrics? Am I a troll for not wanting to replace my 2080 Ti with a card that is faster on some fronts and slower on others?

I sold my 2080 Ti back in August when it became clear that the successor was finally right around the corner. I had hoped for a 20GB 3080 as 10GB VRAM is insufficient for my needs, but the lack of announcements of such a product and the recent rumor that these cards will never be released were enough to cause me to go out and buy another 2080 Ti to hold me over. So I'm back where I started, but plus a few hundred dollars due to profits from the sale of my original card.

Look, I know it's tempting to think that every person who bothers to opine on these matters is some kind of fanboy troll, but that's not always the case. Especially on this forum. This isn't Reddit, it's not a youtube comments section, it's not the wccftech comments section. There are smart, reasonable people here that make up the majority of posters on this forum.

We don't yet know the RT performance of the RX 6000 series, and as a potential customer for this card, that is enough to make me skeptical. Enough so that I will wait until 3rd party reviews, at least.

The RTX 3090 is better on all fronts than the 2080ti. Whether or not one wants to pay for it is another matter.

The reality is that if AMD were clearly better on all fronts, the price would also be up there to match.
 
The RTX 3090 is better on all fronts than the 2080ti. Whether or not one wants to pay for it is another matter.

The reality is that if AMD were clearly better on all fronts, the price would also be up there to match.

The price and the distinct lack of RT performance metrics do lend credence to the idea that the RT performance will be less than the competition's. The question then becomes: how much less?

AMD has demonstrated that they are not afraid to price class-leading products higher than the competition, the Ryzen 5000 family prices are the most recent example.
 
I can't find a PCI Express interface on any die shot that looks anything like that... Navi 10, Navi 14 are both PCI Express 4 so would notionally be the same...

The Vega 20 HBM2 PHYs seem different (taller), but it could be.




Though if it's HBM2E then perhaps two PHYs are overkill? A single HBM2E stack at 3.2Gbps offers 410GB/s of bandwidth, which should be more than enough for e.g. a lower-clocked / lower-power premium GPU.
As Techpowerup likes to say, Navi 21 is a "CGI" die shot... But this shot of Vega 7:


also shows "tall" PHYs. The CGI die shot shows lots of "black" areas that we know aren't "empty", so "area usage" cannot be determined from the CGI shot.

I'm thinking there's going to be a professional variant of this card, with 32GB of HBM...
 
So AMD is assuming the maximum boost clock, rather than the game clock, as the value to calculate the effective memory bandwidth from.. Hmm..
AMD's clocks are more "real" compared to Nvidia's. The 5700 XT boost is 1905MHz and the real median clock while gaming is 1890MHz. The same goes for the 9.7TFLOPs of the 5700 XT.
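As a quick sanity check, that TFLOPs figure falls straight out of the boost clock (a sketch; 2560 stream processors and 2 FLOPs per SP per clock for FMA are the assumptions):

```python
# 5700 XT single-precision throughput at the advertised boost clock.
stream_processors = 2560
flops_per_sp_per_clock = 2     # one FMA counts as 2 FLOPs
boost_clock_ghz = 1.905

tflops = stream_processors * flops_per_sp_per_clock * boost_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # -> ~9.75 TFLOPS, in line with the quoted 9.7
```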

Interesting times. Welcome back, AMD competition in the high end, and hopefully Intel jumps in too next year; better for all of us.

Regarding RT, Igorslab, which is no AMD fanboy, showed some results a few days back, and the RX 6800 was behind Ampere yet better than the 2080 Ti, so will it be enough for you and your games/settings? Maybe it's slower with complex RT and faster with simpler effects. The same, but from the other side, goes for memory: will 8GB or 10GB be enough? And it's good to hear AMD is working on an open-source, DLSS-like supersampling technique.

Perf/watt is better than Ampere but not by much: less than 10% better for the 6800 XT vs the 3080, 15-20% for the 6900 XT vs the 3090, and probably a match between the 3070 and the vanilla 6800, because the 6800 should be 10-15% faster. Curiously, the highest-end one, the 6900 XT, will be the best of the three in that metric, just the opposite of Nvidia, where the RTX 3070 is the most efficient. Surely midrange models from both architectures will be even better in a few months' time when they launch. Hopefully the RX 6700 XT is around 400 USD, because that's still my target with close-to-3070 performance, 12GB and less than 200W... maybe the 6800 is supply-limited and mostly there to fill that gap for a few months, and that's why its price doesn't look so competitive.
 