AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Also very interesting to see they chose not to segment the cards by VRAM. I don't quite understand why all three of them have the same bandwidth and memory size. If 16GB with 128MB of Infinity Cache are enough for the 6900 XT, they are overkill for the 6800. I don't get it.

There is probably no need whatsoever to disable any of the cache for yields (as it comes with built-in extra redundancy).

Yeah, but they could use 12GB on a 192-bit bus. Maybe it's cheaper for them to just use the same PCB for all three variants and just adjust the power delivery?

I agree; given the drop in compute, it would make sense to me for the 6800 to drop two memory chips. They could drop the price a little, and it would make their lineup better. Even if they are using the exact same PCB, there's no reason why they couldn't simply leave two spots unpopulated.

Putting 16GB into every model is a deliberate statement.
 
Yeah, but they could use 12GB on a 192-bit bus. Maybe it's cheaper for them to just use the same PCB for all three variants and just adjust the power delivery?

There should indeed be a 192-bit bus part in the future, as there were a lot of rumors about a "Navi21 XE" part. But I think that by using the higher-rated PCB of its "greater sisters" the card was easier to validate and launch. Moreover, I suspect that the 6800 will be heavily overclockable (by a greater percentage than the other Navi21 SKUs) if not locked by hard power limits - and thus it may have greater value for some than what was presented today.
 
The "GPU accelerated decompression" is just shaders running on ALUs, not dedicated hardware or some such, and we've had GPU accelerated decompressors forever.
The point was that it's not "nvidia exclusive" or even anything NVIDIA did first, as your post seemed to suggest.

I don't know what you think we're talking about, but realtime GPU-based IO decompression is absolutely a new thing in RTX IO. I was hoping it wouldn't be an Nvidia-exclusive feature, but based on the information released by AMD so far, it looks like it might be. i.e., DirectStorage support or not, AMD-based systems may still have to decompress the IO stream on the CPU before it can be used. I hope I'm wrong.
 
RTX 3080 TUF 631 FPS

[attached benchmark screenshot]

So better than Turing and less than Ampere - as suspected.
 
.... If 16GB with 128MB of Infinity Cache are enough for the 6900 XT, they are overkill for the 6800. I don't get it.

No? You can benefit from a large amount of VRAM with lower raw power. Some games use a lot of VRAM even when not at 4K...
 
I don't know what you think we're talking about, but realtime GPU-based IO decompression is absolutely a new thing in RTX IO. I was hoping it wouldn't be an Nvidia-exclusive feature, but based on the information released by AMD so far, it looks like it might be. i.e., DirectStorage support or not, AMD-based systems may still have to decompress the IO stream on the CPU before it can be used. I hope I'm wrong.
RTX IO is nothing but NVIDIA's fancy name for their DirectStorage support, and unless I've completely misunderstood something major, the only new thing is the DirectStorage API from Microsoft. Even if RDNA1 missed the support, AMD could map and access pretty much any media through HBCC, and NVIDIA had their GPUDirect before Ampere too. Realtime decompression isn't anything new for GPUs; the fact that the data might originate from an SSD instead of going through VRAM shouldn't affect the decompression part at all.
Don't be so sure it will always be done on shaders.
Shaders, tensors, whatever units in the GPU do it is irrelevant unless they add dedicated hardware for it.
 
Next year could be a very good year for mobile gaming. Put 128MB or even 256MB of Infinity Cache on a mobile APU and it could compete with a discrete card.


Also makes you wonder about an Xbox Series S portable...
 
No? You can benefit from a large amount of VRAM with lower raw power. Some games allocate a lot of VRAM even when not at 4K...

;).

The question was about segmentation and bandwidth. Having the same bandwidth in a card with 80 CUs and one with 60 CUs is weird. Having the same amount of RAM in a $1000 card and a $580 card is also very weird...
 
I'm confused about the frontend.

If I follow the Reddit post, I think we have 8 rasterizers, because 10 CUs per shader array implies there should be 8 shader arrays, and each shader array normally has its own rasterizer. The halving of the RBs per shader engine is also an indicator. But when I look at the picture I see only 4 rasterizers (red).


[attached: block diagram from the Reddit macOS leak]

[attached: Navi21 block diagram]

Reddit macOS leak:
 
Supposedly they were bound by the PCIe BAR limitation. Perhaps now, with the Zen 3 platform, they run Infinity Fabric over the PCIe x16 link when an AMD GPU (with XGMI?) is detected, and map the GPU local memory into the system physical address space through IF mechanisms instead. :???:
Regarding "Smart Memory Access": most AMD GPUs up to now have shipped with a default 256MB VRAM BAR "aperture", meaning the CPU can only directly access 256MB of VRAM. That gets exposed in Vulkan/DX12 as a separate memory heap marked as "Device Local + Host Visible", even though technically it's a subset of the main device-local VRAM heap.

But AMD dGPUs going back all the way to the OG GCN (GFX6: Tahiti, Verde, etc.) actually support PCIe BAR resizing up to the full size of VRAM (assuming it's a power-of-two size). There was a Linux kernel patch to support it a few years ago; search for "amdgpu_device_resize_fb_bar". Perhaps AMD has finally got this working on Windows now, but for some reason only on Zen 3 and/or PCIe Gen 4 boards?

Some game engines may already be able to take advantage of CPU access to all of VRAM. But other games might need a minor tweak to explicitly take advantage of it.
 
RTX IO is nothing but NVIDIA's fancy name for their DirectStorage support, and unless I've completely misunderstood something major, the only new thing is the DirectStorage API from Microsoft. Even if RDNA1 missed the support, AMD could map and access pretty much any media through HBCC, and NVIDIA had their GPUDirect before Ampere too. Realtime decompression isn't anything new for GPUs; the fact that the data might originate from an SSD instead of going through VRAM shouldn't affect the decompression part at all.

No. Based on what has been revealed so far, DirectStorage is just an API to make data transfers from the SSD to the final destination more efficient. As far as we know it has nothing to do with the decompression aspect, which still has to be performed on the CPU unless you're using RTX IO, which uses a separate API from DirectStorage.

Again, this is based on all the currently available information. It may be that AMD simply aren't revealing their full hand yet.
 