AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)


The question was about segmentation and bandwidth. Having the same bandwidth in a card with 80 CUs and one with 60 CUs is weird. Having the same amount of RAM in a $1000 card and a $580 card is also very weird...

If we're talking about bandwidth, then yeah. But amount, no. Some games use more than 8 GB, even under 4K. Not just allocating...

And I guess in the end it's easier for AMD to build with one memory config? And of course a nice 16 GB sticker vs Nvidia products.
 
CapFrameX's 2080 Ti got 470 FPS; maybe others changed the window size from the default 1280x720, idk. Usually a 2080 Ti is about twice as fast as a 2060, so 470 FPS seems correct compared to my 240 FPS.
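The sanity check above is just arithmetic; a quick sketch of it (the ~2x 2080 Ti vs 2060 ratio is a rule of thumb from the thread, not a measurement):

```python
# Rough consistency check of the reported benchmark numbers (illustrative only).
fps_2060 = 240    # my reported result
fps_2080ti = 470  # CapFrameX's reported result

ratio = fps_2080ti / fps_2060
print(f"2080 Ti / 2060 FPS ratio: {ratio:.2f}x")  # ~1.96x, close to the expected ~2x
```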

Yes, that makes a lot more sense. I figured it would be a case of different settings, but I'm not familiar with the benchmark.

That's also exactly in line with the previous leak. So 2080 Ti-level performance certainly isn't too shabby. As has been mentioned before, the synergy with the consoles may close a lot of that gap once games start optimizing RT for that architecture.
 
I never believed in the Infinity Cache; I have to give it to AMD and that British youtuber who was so adamant that it exists. I wonder how this stuff affects potential GPGPU applications, but I guess there's CDNA for that, so AMD won't have to do both at the same time in the same chip anymore. Hopefully it's more BW-constrained in pure compute tasks so the miners won't buy out all the cards (or at least there'll be some lag before mining algos run well on RDNA2, like with Vega, which was more or less available at MSRP till late October 2018).
 

I didn't believe in it either, and I kinda still don't. It just doesn't scale well with resolution. What happens when dedicated next-gen games come out? Does the cache get thrashed to hell at 4K?

Anyway, I'm not worried about GPU performance. They have CDNA, which is their Vega-derived arch, coming out this year too.
 

I do expect there to be some edge cases where the infinity cache falls apart as well. I suspect it'll work great most of the time, but there will be some situations where you can't fake your way out of having the real memory bandwidth. Heavy simultaneous RT + rasterization might be one of those cases. It'll be interesting to see how much if any manual tuning AMD will need to do on the driver side to avoid thrashing the cache in heavy RT games with large working sets.
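The trade-off being debated here can be sketched as a simple hit-rate model: the cache "fakes" bandwidth only in proportion to how often requests hit it, and hit rate falls as the working set grows with resolution. All numbers below are assumptions for illustration, not AMD specifications:

```python
# Toy model of effective bandwidth from a large on-die cache in front of VRAM.
# Hypothetical figures: 512 GB/s VRAM (a 256-bit GDDR6 bus at 16 Gbps) and a
# made-up 1600 GB/s cache bandwidth.

def effective_bandwidth(hit_rate: float, cache_bw: float, vram_bw: float) -> float:
    """Blend cache and VRAM bandwidth by cache hit rate (all in GB/s)."""
    return hit_rate * cache_bw + (1.0 - hit_rate) * vram_bw

VRAM_BW = 512.0    # GB/s
CACHE_BW = 1600.0  # GB/s (assumed)

# Hit rate typically drops as resolution (and thus working set) grows.
for res, hit in [("1080p", 0.80), ("1440p", 0.70), ("4K", 0.55)]:
    print(f"{res}: ~{effective_bandwidth(hit, CACHE_BW, VRAM_BW):.0f} GB/s effective")
```

With these made-up numbers the 4K case still lands well above the raw bus, which is why thrashing (hit rate collapsing toward zero) rather than resolution per se is the real failure mode.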
 
The real question is whether the 6900 XT can beat the 3080 in ray tracing. I have a feeling it can't. For professional applications that deal with RT, that means the $1000 price tag may not be a bargain.
 
I didn't believe in it either, and I kinda still don't. It just doesn't scale well with resolution. What happens when dedicated next-gen games come out? Does the cache get thrashed to hell at 4K?

Why would it? You aren't storing raw textures in there; you're storing data that might be needed again by the other caches/units, and supposedly RT stuff.
If the 60-70% utilization out of the box is a realistic number and one of the consoles has something similar, we may see more devs than just the AMD-partnered ones optimize for that other 20-30%.
 
The real question is whether the 6900 XT can beat the 3080 in ray tracing. I have a feeling it can't. For professional applications that deal with RT, that means the $1000 price tag may not be a bargain.
It's questionable whether Ampere's increased RT performance brings any value to actual games. The bottlenecks seem to be elsewhere.
 
What are the odds of RDNA2 mobile GPUs coming out next year?
All of the odds.
Infinity Cache, as originally mentioned by RGT when they first leaked it, was developed as a way to significantly increase performance on mobile GPUs by reducing the number of VRAM channels (hence easier to scale down), and above all to increase gaming performance on PC APUs that are stuck on standard DDRx or LPDDRx.
 
No, based on what has been revealed so far. DirectStorage is just an API to make data transfers from the SSD to their final destination more efficient. As far as we know, it has nothing to do with the decompression aspect, which still has to be performed on the CPU unless you're using RTX IO, which uses a separate API from DirectStorage.

https://www.anandtech.com/show/1620...-starts-at-the-highend-coming-november-18th/2

Finally, AMD today is also confirming that they will offer support for Microsoft's DirectStorage API. Derived from tech going into the next-gen consoles, DirectStorage will allow game assets to be streamed directly from storage to GPUs, with the GPUs decompressing assets on their own. This bypasses the CPU, which under the current paradigm has to do the decompression and then send those decompressed assets to the GPU.
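The difference between the two paths described in the quote can be sketched as a toy timing model. Every figure here (SSD speed, CPU decompression throughput, PCIe bandwidth, compression ratio) is a made-up assumption for illustration:

```python
# Toy model: CPU-decompression path vs GPU-decompression (DirectStorage-style) path.

def cpu_decompress_path(compressed_gb: float, ratio: float, ssd_gbps: float,
                        cpu_decomp_gbps: float, pcie_gbps: float) -> float:
    """SSD read -> CPU decompresses -> decompressed data uploaded over PCIe (seconds)."""
    read = compressed_gb / ssd_gbps
    decomp = compressed_gb / cpu_decomp_gbps
    upload = compressed_gb * ratio / pcie_gbps  # decompressed (larger) data crosses PCIe
    return read + decomp + upload

def gpu_decompress_path(compressed_gb: float, ssd_gbps: float, pcie_gbps: float) -> float:
    """Compressed data goes straight to the GPU, which decompresses it itself
    (GPU decode assumed fast enough not to be the bottleneck)."""
    return compressed_gb / min(ssd_gbps, pcie_gbps)

t_cpu = cpu_decompress_path(4.0, 2.0, ssd_gbps=7.0, cpu_decomp_gbps=2.0, pcie_gbps=16.0)
t_gpu = gpu_decompress_path(4.0, ssd_gbps=7.0, pcie_gbps=16.0)
print(f"CPU path: {t_cpu:.2f}s  GPU path: {t_gpu:.2f}s")
```

Even with these rough numbers, the CPU decompression stage dominates, which is the bottleneck the quoted article says the new path removes.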
 
Very good. If we see desktop 6700 and 6600 in Q1/Q2 '21, there will likely be refreshed laptop designs later on in 2021 with Zen3 based mobile processors and RDNA2 mobile GPUs.
Back to school, aka early Q3 best case. Late Q3/early Q4 worst case.

All of the odds.
Infinity Cache, as originally mentioned by RGT when they first leaked it, was developed as a way to significantly increase performance on mobile GPUs by reducing the number of VRAM channels (hence easier to scale down), and above all to increase gaming performance on PC APUs that are stuck on standard DDRx or LPDDRx.
Interesting. I can only hope for RDNA2 Radeon Pro in laptops. (Then again, knowing Dell and Lenovo, they'll stick to Nvidia for their Precisions/P series.)
 