Is that an HBM PHY (x2) at the "south west" side?
Strategically it's a risky play that could pan out if they don't enable DXR support at all until well past launch, assuming RT acceleration is worse.
Reviewers tend to want to bench apples to apples, which means that if DXR were enabled they'd use RT settings (with a tendency towards max settings); with no support, the comparison will be done sans RT. This means launch reviews will show a more favourable performance comparison.
Meanwhile, post-launch benchmarks with RT support can be massaged to promote the fact that it can be done, and framed from a user-experience standpoint instead of as a direct performance comparison.
So AMD is assuming the maximum boost clock as the value from which to calculate the memory bandwidth, instead of the game clock. Hmm...
2.25 GHz * 512 bytes = 1152 GB/s
1152 GB/s + 512 GB/s = 1664 GB/s
1664 GB/s / 768 GB/s = 2.16666 ~ 2.17
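As a sanity check, here's that arithmetic spelled out (a minimal sketch, assuming a 512-byte-per-clock Infinity Cache interface, 16 Gbps GDDR6, and the 2.25 GHz maximum boost clock):

# Sanity check of the numbers above (assumed: 512 B/clk cache interface,
# 2.25 GHz max boost, 16 Gbps GDDR6 on 256-bit and 384-bit buses).
boost_clock_ghz = 2.25
cache_bytes_per_clk = 512

cache_bw = boost_clock_ghz * cache_bytes_per_clk      # 1152.0 GB/s
gddr6_256bit = 16 * 256 / 8                           # 512.0 GB/s
gddr6_384bit = 16 * 384 / 8                           # 768.0 GB/s

combined = cache_bw + gddr6_256bit                    # 1664.0 GB/s
print(round(combined / gddr6_384bit, 2))              # ~2.17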
All the concern "trolling" about hypothetical RT performance is giving me diarrhea...
anyways... the only official figure is the following:
It didn't? 6800XT trading blows with 3080 and 6800 easily beating 2080Ti/3070.
Edit- according to AMD numbers.
bandwidth is the large factor here that allows it to keep up. Curious to see what happens over time.

The theory was that the 3080 is compute heavy but doesn't scale as well at lower resolutions like 1440p, so a high-clocked RDNA2 with more ROPs and more geometry performance would beat it handily at 1440p and 1080p. The 1440p numbers looked roughly equal, and the 4K numbers looked roughly equal, so it seems like the 6800 XT scales pretty similarly with resolution to the 3080.
I fully expect RT benchmarking in reviews unless the NDA prohibits it; not having it would be a relative disaster for AMD, much worse than RT performance being slower than equivalently placed NV cards.
And even if prohibited before launch due to review contracts for pre-release hardware, post launch nothing stops a reviewer from purchasing a card and reviewing it. I don't plan to pre-order one either way, so I'd be getting one 2+ months after launch.
Another big benefit when comparing 6800XT and 6800 (I'm not interested in the 6900XT) is that they both come with 16 GB of memory. I think it was a huge mistake for NV not to launch the 3080 and 3070 with 16 GB of memory considering that a new console generation has just started which means that upcoming AAA games will be utilizing more memory.
The limited 10 GB of memory on the RTX 3080 is why I didn't buy one and never really seriously considered getting it.
Regards,
SB
Perhaps from the earlier code commits, there are a few shader arrays deactivated?

128 ROPs:
https://www.amd.com/en/products/specifications/compare/graphics/10516,10521,10526
Ooh, and 96 ROPs for RX 6800 (one shader engine disabled?)
I'm trying to search for RTX numbers on that data set, to get a ballpark idea of their relative positioning and to know how good/bad that software fallback baseline is.

All the concern "trolling" about hypothetical RT performance is giving me diarrhea...
anyways... the only official figure is the following:
Hard to tell from an artistic rendering, although I think an HBM PHY tends to be larger than that. Maybe another interface, like PCIe? The other side doesn't seem to have a solid interface block besides the display-related ones.

Is that an HBM PHY (x2) at the "south west" side?
I think we can actually calculate how much bandwidth there is in the Infinity Cache, from this slide:
If we assume they're talking about 16Gbps, then the "1.0X 384bit G6" means 768GB/s and the "256bit G6" is 512GB/s.
If the Infinity Cache is 2.17x the 384bit G6, then its output is 1666.56GB/s. Take away the 512GB/s from the 256bit G6 and we get 1154.56GB/s for the Infinity Cache alone.
I'm guessing this is an odd number because this LLC is working at the same clocks as the rest of the GPU.. maybe they're using the 2015MHz game clock.
This could be achieved with separate feeds, but the max bandwidth of the cache has enough of a shadow that a GDDR6 controller sharing the same stop could fill in the missing 512GB/s.

3. Measurement calculated by AMD engineering, on a Radeon RX 6000 series card with 128 MB AMD Infinity Cache and 256-bit GDDR6. Measuring 4k gaming average AMD Infinity Cache hit rates of 58% across top gaming titles, multiplied by theoretical peak bandwidth from the 16 64B AMD Infinity Fabric channels connecting the Cache to the Graphics Engine at boost frequency of up to 1.94 GHz. RX-535
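Plugging in the footnote's own figures lands in the same ballpark (a rough sketch; the 16 Gbps GDDR6 speed and the final addition to the 256-bit bus are my assumptions, the rest comes from the quote above):

# Rough reproduction of AMD's RX-535 footnote math (figures from the quoted endnote;
# 16 Gbps GDDR6 and the final addition are assumptions, not AMD's stated method).
fabric_channels = 16
bytes_per_channel = 64            # 64B Infinity Fabric channels
fabric_clock_ghz = 1.94           # quoted boost frequency
hit_rate = 0.58                   # measured 4k average hit rate

fabric_peak = fabric_channels * bytes_per_channel * fabric_clock_ghz  # ~1986.6 GB/s
cache_effective = fabric_peak * hit_rate                              # ~1152 GB/s

gddr6_256bit = 16 * 256 / 8                                           # 512 GB/s
gddr6_384bit = 16 * 384 / 8                                           # 768 GB/s

effective_total = cache_effective + gddr6_256bit                      # ~1664 GB/s
print(round(effective_total / gddr6_384bit, 2))                       # ~2.17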
Am I a troll for desiring a high-end GPU from AMD which can compete with Nvidia in all metrics? Am I a troll for not wanting to replace my 2080 Ti with a card that is faster on some fronts and slower on others?
I sold my 2080 Ti back in August when it became clear that the successor was finally right around the corner. I had hoped for a 20GB 3080 as 10GB VRAM is insufficient for my needs, but the lack of announcements of such a product and the recent rumor that these cards will never be released were enough to cause me to go out and buy another 2080 Ti to hold me over. So I'm back where I started, but plus a few hundred dollars due to profits from the sale of my original card.
Look, I know it's tempting to think that every person who bothers to opine on these matters is some kind of fanboy troll, but that's not always the case. Especially on this forum. This isn't Reddit, it's not a youtube comments section, it's not the wccftech comments section. There are smart, reasonable people here that make up the majority of posters on this forum.
We don't yet know the RT performance of the RX 6000 series, and as a potential customer for this card, that is enough to make me skeptical. Enough so that I will wait until 3rd party reviews, at least.
This picture does not make sense.
It's missing two DCUs.
The RTX 3090 is better on all fronts than the 2080ti. Whether or not one wants to pay for it is another matter.
The reality is if AMD was clearly better on all fronts the price would also be up there to match.
I can't find a PCI Express interface on any die shot that looks anything like that... Navi 10, Navi 14 are both PCI Express 4 so would notionally be the same...

More like a PCI-E SERDES.
Larger die-shot here: https://images.anandtech.com/doci/16202/Die-Shot_Color-Front.jpg
As Techpowerup likes to say, Navi 21 is a "CGI" die shot... But this shot of Vega 7:

The Vega 20 HBM2 PHYs seem different (taller), but it could be.
Though if it's HBM2E then perhaps two PHYs are overkill? A single HBM2E stack at 3.2Gbps has 410GB/s of bandwidth, which should be more than enough for e.g. a lower-clocked / lower-power premium GPU.
AMD's clocks are more "real" compared to Nvidia's. The 5700 XT boost is 1905 MHz and the median real clock while gaming is 1890 MHz. The same goes for the 9.7 TFLOPS of the 5700 XT.

So AMD is assuming the maximum boost clock as the value from which to calculate the memory bandwidth, instead of the game clock. Hmm...
Navi 21 features xGMI links as the Vega 20 does. Can it be related?

Is that an HBM PHY (x2) at the "south west" side?