AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

So we expect consoles at about half the performance to run 4k RT at 30-60fps but these new cards are unable to?

Spider-Man on PS5, for example, does ray tracing at a quarter of the screen resolution. It's a matter of quality in the end. If you look at the current crop of RT PC games, they are heavy to the point of requiring DLSS to run well. "Well" on PC meaning north of 60fps at 4K.
 
https://www.tomshardware.com/news/amd-rx-6000-rdna-2-big-navi-gpus-revealed

AMD RX 6000 Key Points

  • Up to 5120 shading cores for the RX 6900 XT, 3840 cores in RX 6800
  • 2.1-2.25 GHz clocks
  • 16GB of GDDR6 for the three cards revealed today
  • 128MB Infinity Cache that doubles effective bandwidth
  • A Ray Accelerator for each CU, 10x Acceleration for RT Calculations
  • Prices $999, $649 and $579 for 6900 XT, 6800 XT and 6800
  • Upcoming FidelityFX improvements to offer a DLSS alternative
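
As a rough sanity check on the "doubles effective bandwidth" bullet above: a quick back-of-envelope, assuming the 256-bit, 16 Gbps GDDR6 configuration listed for these cards (reading "doubling" as applying to the raw GDDR6 figure is my assumption):

```python
# Back-of-envelope for the "doubles effective bandwidth" claim.
# Assumes the 256-bit bus and 16 Gbps GDDR6 listed for the RX 6800/6800 XT/6900 XT.
bus_width_bits = 256
gbps_per_pin = 16

raw_gddr6_bw = bus_width_bits * gbps_per_pin / 8      # GB/s from the memory bus alone
print(f"raw GDDR6 bandwidth : {raw_gddr6_bw:.0f} GB/s")        # ~512 GB/s
print(f"'doubled' effective : {2 * raw_gddr6_bw:.0f} GB/s")    # ~1 TB/s with Infinity Cache
```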
 
So we expect consoles at about half the performance to run 4k RT at 30-60fps but these new cards are unable to?

Next-generation consoles aren't going to be using full RT, but selective forms of it. PC GPUs, on the other hand, will have to support full RT because Nvidia hardware is very much capable of it. As for AMD GPUs, we'll have to wait and see if they're capable of full RT at reasonable frame-rates.
 
I have yet to hear of a concrete 4k60 ray-traced game announcement for next-gen console. Have you?
I asked a question that required a yes or no answer. Thanks.

Next-generation consoles aren't going to be using full RT, but selective forms of it. PC GPUs, on the other hand, will have to support full RT because Nvidia hardware is very much capable of it. As for AMD GPUs, we'll have to wait and see if they're capable of full RT at reasonable frame-rates.
What current or announced game is fully rendered using RT?
 
The 6900 XT benchmark used both Rage Mode and Smart Access Memory.
The 6800 XT graphs used neither.
The 6800 graph used only Smart Access Memory.

About ray tracing we already knew, and it wouldn't have been a fair comparison anyway, because no game out there has optimizations for RDNA2 and they're all RTX sauce.
Yeah, my bad. The RX 6800 XT was benchmarked "vanilla" for some reason... consistency, anyone?
1) I believe they clearly marked the benchmarks using the OC and other features, and then clearly showed how much those features are adding to it.
Yes, they marked one benchmark as using the memory feature, another as not using it, and the third as using the memory feature and being overclocked. What marketing.
 
"Based on Zen L3 cache"
Zen's L3 was the closest example I could think of among AMD's large SRAM implementations prior to the announcement, so this makes sense.
It would probably be structured differently, since each Zen L3 could supply 32 bytes per cycle to 4 cores at ~4GHz, per 16 MB cache.
The delivered bandwidth numbers would be massively higher if that level of bandwidth were maintained, and so would the power and area costs.
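
Putting numbers on that, using the per-slice figures just quoted (scaling the same bandwidth density to 128 MB is exactly the hypothetical cautioned against above):

```python
# Zen L3 figures quoted above: 32 bytes/cycle to each of 4 cores at ~4 GHz, per 16 MB slice.
bytes_per_cycle_per_core = 32
cores_per_slice = 4
clock_ghz = 4.0
slice_mb = 16

per_slice_bw = bytes_per_cycle_per_core * cores_per_slice * clock_ghz   # GB/s
print(f"per 16 MB L3 slice     : ~{per_slice_bw:.0f} GB/s")             # ~512 GB/s

# Hypothetically sustaining that bandwidth density across 128 MB:
slices = 128 // slice_mb
print(f"128 MB at the same rate: ~{per_slice_bw * slices / 1000:.1f} TB/s")  # ~4 TB/s
```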

The more recent code changes hinted at something changing in the hierarchy, although I was hesitant to commit to the full 128MB because of how flexible caching can turn out to be.
Perhaps 8 or 16 blocks, one per memory controller?
I'm not sure how AMD is arriving at the delivered bandwidth figures, so it's hard to know the bandwidth of the cache itself and whether AMD is adding the external bus bandwidth separately.
At the claimed bandwidth levels, my earlier question about the role of the L2 comes into play. That bandwidth is unusually close to the bandwidth offered by an unmodified L2.
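
One plausible way to read a "delivered/effective bandwidth" figure is as a hit-rate-weighted blend of cache and GDDR6 bandwidth; that model and the numbers below are my guesses, not a confirmed description of AMD's methodology:

```python
# Hypothetical model only: effective bandwidth as a hit-rate-weighted mix of
# internal cache bandwidth and external GDDR6 bandwidth. Neither the cache
# bandwidth nor the hit rates below are AMD-confirmed numbers.
def effective_bw(hit_rate, cache_bw_gb_s, dram_bw_gb_s=512.0):
    return hit_rate * cache_bw_gb_s + (1.0 - hit_rate) * dram_bw_gb_s

for hit_rate in (0.4, 0.6, 0.8):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bw(hit_rate, 2000.0):.0f} GB/s effective")
```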


The 128MB cache array was quite well spread along the die edges, judging from the die-shot mockup.

It looks like there's a band of Infinity Fabric going around the GPU core region, akin to what was diagrammed for Vega 7. It would make sense, given the name, that the cache is tied to the fabric. If it's memory-side, this may help enable the direct memory access functionality with AMD's CPUs.
I'm curious whether it's gone to a longer cache line architecture to help with area and power.
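
On the longer-cache-line idea, most of the area and power win would come from shrinking the tag array and cutting lookups. A crude illustration, where the 40-bit physical address and the direct-mapped layout are assumptions made for the sketch, not anything AMD has disclosed:

```python
import math

# Illustrative only: tag-array overhead for a 128 MB cache at different line sizes.
# Assumes a 40-bit physical address and a direct-mapped organization for simplicity;
# associativity, state bits and ECC are ignored.
CACHE_BYTES = 128 * 1024 * 1024

for line_bytes in (64, 128, 256):
    lines = CACHE_BYTES // line_bytes
    offset_bits = int(math.log2(line_bytes))
    index_bits = int(math.log2(lines))
    tag_bits = 40 - offset_bits - index_bits
    tag_array_kb = lines * tag_bits / 8 / 1024
    print(f"{line_bytes:>3} B lines: {lines:>7} lines, ~{tag_array_kb:.0f} KB of tag storage")
```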

Did AMD have two different die shot artists?
https://images.anandtech.com/doci/16201/88143984.jpg
https://images.anandtech.com/doci/16201/88215796.jpg
 
So I was going to throw this in the speculation thread when I thought of it a couple of days ago, but now it's locked.

It seems like Infinity Cache is going to be a very efficient way to scale down on new nodes, keeping power and performance in a compact size for mobile.
I don't expect them to do a simple shrink for RDNA 3 next year, but it seems like scaling down to 5nm will allow additional focus on other power/performance benefits.
 
I asked a question that required a yes or no answer. Thanks.


What current or announced game is fully rendered using RT?

Minecraft RTX and Quake II RTX.

But if you look at hybrid games with partial ray tracing, the effect is expensive. Control is probably one of the better and heavier games utilizing ray tracing. Below is a graph from Metro Exodus.
[Attachment: Metro Exodus ray tracing benchmark graph]

https://www.pcworld.com/article/357...080-founders-edition-review.html?page=6#toc-5

Control
[Attachment: Control ray tracing benchmark graph]

https://www.techspot.com/article/21...t=The 3080 sees a 36,the Turing-based 2080 Ti.
 
All the concern "trolling" about hypothetical RT performance is giving me diarrhea... anyway, the only official figure is the following:
anyways... the only official figure is the following:

  1. Measured by AMD engineering labs 8/17/2020 on an AMD RDNA 2 based graphics card, using the Procedural Geometry sample application from Microsoft’s DXR SDK, the AMD RDNA 2 based graphics card gets up to 13.8x speedup (471 FPS) using HW based raytracing vs using the Software DXR fallback layer (34 FPS) at the same clocks. Performance may vary. RX-571
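
For what it's worth, the ratio in that footnote lines up with the quoted frame rates:

```python
# Sanity check of the RX-571 footnote numbers.
hw_fps, fallback_fps = 471, 34
print(f"speedup: {hw_fps / fallback_fps:.1f}x")   # ~13.9x, consistent with "up to 13.8x"
```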
 
I'm disappointed they didn't show anything comparable to RTX IO and GPU-based decompression. That suggests it will be an Nvidia-exclusive feature and thus not widely adopted.

Performance looks great on the face of it, especially at those price points and with those memory amounts. However, the lack of RT benchmarks, and the reliance in some cases on "Smart Access Memory" (which very few will have access to in the beginning) and Rage Mode in hand-picked game comparisons, is a little worrying. My guess is that without these (and in RT titles) the 6800 XT and 6900 XT will be a little slower than the 3080 and 3090 respectively, although the 6800 XT still has a significant memory advantage.

Also, I'm not buying the "we're working on ML super resolution" claim. Or rather, I'm sure they are, but I don't expect it to be released any time soon or to be on par with DLSS when it is. If it works on consoles too, though, as suggested above, it should be much more widely adopted, which may make it more worthwhile overall - whenever we eventually get it, that is.

For me the most exciting part was the 6800. It looks like a clear win against the 3070, possibly even in RT. AMD seems to think so based on the price point.

Oh and the Infinity Cache seems to be pretty awesome! Very impressed with that.
 