AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

The 3090's lead over the 3080 is about 15% at 4K according to TPU (and in some cases can certainly be more). So it does seem that the 11% uplift in resources from the 6800 XT to the 6900 XT will struggle to bridge that gap if the 6800 XT = 3080.
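Rough back-of-the-envelope on those two figures, assuming (generously) that the 6900 XT's performance scales linearly with its ~11% extra CUs, which rarely happens at the same power limit:

6900 XT ≈ 1.11 × 6800 XT ≈ 1.11 × 3080
3090 ≈ 1.15 × 3080
1.11 / 1.15 ≈ 0.97

So even the perfect-scaling case leaves the 6900 XT roughly 3-4% behind the 3090.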

You have to factor in price. The 3080 FE is $699; the 3090 is $1,499. Double the price for 15% more performance is not great value for gaming. The 6900 XT at $999 is significantly cheaper than the 3090. My belief is that the 3090 is for creators using Blender, Arnold, machine learning, etc. It's not a sensible purchase for gaming.
 
I know you though... that is why I dismissed your post. You indirectly affirmed the original post, which, in itself, is the false premise I was questioning.
Interesting how you are skeptical NOW about RT performance but weren't when you purchased your RTX 2080 Ti? Or before you sold your 2080 Ti.

You know me? 95% of my posts were made over a decade ago. Whatever opinion you have formed about me is likely to be out of date.

The 2080 Ti was the fastest graphics card in any gaming workload at the time of its release, including RT. Do I desire more RT performance? Of course I do; that's why I want to be sure that the card I purchase to replace my 2080 Ti will in fact be faster in RT. You think you've caught me in a logic trap, but I am being completely consistent.

Aside from raw RT performance, Nvidia also has DLSS in its bag of tools. AMD has so far been quite vague about its DLSS competitor; no performance metrics or implementation details have been shared as of yet.

If I'm going to spend my hard-earned cash on a luxury item such as a gaming graphics card, I'm not going to do so purely on speculation and future promises. I need hard evidence. I didn't pre-order a 2080 Ti; I waited until third-party reviews were out, then I made my purchase. I'm going to stick with that strategy again here.
 
[Image: size comparison, 3090 vs 6900 XT on the left and 3080 vs 6800 XT on the right]


This is encouraging. I had to purchase a vertical mounting bracket to install an air-cooled 2080 Ti in my PC-O11 Dynamic case because the card I purchased was too large to fit in horizontal orientation, thanks to the pump+reservoir combo I use in my water-cooling loop. A shorter card might allow me to return to a horizontal configuration without feeling like I am cramming components into a space that isn't accommodating.

Little details like this are often overlooked in reviews, but have meaningful impact on users.
 
Sure! That's what I said too: little details will decide each one's purchase decision now, but this only happens when there is competition ;) And these little things are sadly overlooked in most reviews.

Honestly, Nvidia's FE temp/dB figures are very good, so AMD would be fine with just getting close. Also, a 7 nm chip could be more difficult to cool than an 8 nm one even if it requires less power, so I can't wait for the AIB reviews for power/temps. This whole power/temp/dB/size question is the most important thing for me too, with a mini-ITX case. Even the same dB level can sound different to the ear... blower-type coolers at the same dB are more ear-friendly in my case, for example. I'm fine with 2080 Ti-ish performance though, so I will most likely wait for the RX 6700 for lower wattage and a smaller card.
 
Scott Herkelman said on stage that all the games shown in the slide with performance boosts from Smart Access Memory required no developer intervention.

Frank Azor basically said it works out of the box but you can expect bigger gains if game devs take advantage of the feature.
 
You have to factor in price. The 3080 FE is $699; the 3090 is $1,499. Double the price for 15% more performance is not great value for gaming. The 6900 XT at $999 is significantly cheaper than the 3090. My belief is that the 3090 is for creators using Blender, Arnold, machine learning, etc. It's not a sensible purchase for gaming.

I was just talking in pure performance terms. I don't disagree that the 3090 is poor value vs the 3080, and probably vs the 6900 XT too. I'm just pointing out that the 3090 likely has a bigger performance advantage over the 3080 than the 6900 XT does over the 6800 XT. So however well the 6800 XT does vs the 3080, the 6900 XT is likely to do worse vs the 3090, although AMD's slides didn't really show that thanks to their inclusion of Rage Mode in the 3090 comparison.

37 dB, says the video shared by @Leoneazzurro5, which doesn't seem bad, though I don't know your expectations in that regard.

Yeah, that's not bad. The 3080 FE is 36 dB according to TPU, but they would need to be tested using the same methodology for a proper comparison. The 3070 is seriously impressive though, at only 30 dB, again according to TPU.
 
Did anyone talk about that USB-C connector? They didn't even mention HDMI 2.1, did they?

There's quite a bit of info AMD is still keeping to themselves.
 
"Smart Access Memory" looks like a CPU-GPU UMA implementation based off of the slides ?

It sounds like an extension on top of pinned memory and the D3D12/Vulkan 256 MB device-local memory heap whose memory type is host-visible and coherent ...

I guess they removed the 256 MB allocation limit for that particular memory heap, based on their wording ...
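If that reading is right, the heap in question is the one a Vulkan app already finds today by walking the memory types for the DEVICE_LOCAL + HOST_VISIBLE + HOST_COHERENT combination (the PCIe BAR window). A minimal host-side sketch of that walk, purely illustrative (the helper name is mine, and whether SAM really is just this heap enlarged is the speculation above, not anything AMD has confirmed):

#include <vulkan/vulkan.h>
#include <cstdint>

// Returns the index of the first memory type that is DEVICE_LOCAL,
// HOST_VISIBLE and HOST_COHERENT (i.e. VRAM the CPU can write directly
// through the BAR), or -1 if none exists. Also reports the size of the
// heap backing it via heapSizeOut.
int FindBarMemoryType(VkPhysicalDevice gpu, VkDeviceSize* heapSizeOut) {
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);

    const VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        if ((props.memoryTypes[i].propertyFlags & wanted) == wanted) {
            if (heapSizeOut)
                *heapSizeOut =
                    props.memoryHeaps[props.memoryTypes[i].heapIndex].size;
            return static_cast<int>(i);
        }
    }
    return -1;  // no directly host-visible VRAM exposed
}

On current AMD cards the heap that comes back here reports 256 MiB; the speculation amounts to that number growing to (roughly) full VRAM.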
 
Anything about chip size? The rectangular shape is curiously similar to the PS5's APU. If nearly 20% of the efficiency comes from the Infinity Cache, which is what lets the CUs run so fast, I can't see how Sony hits that clock without some Infinity Cache of its own.
 
"Smart Access Memory" looks like a CPU-GPU UMA implementation based off of the slides ?

It sounds like an extension on top of pinned memory and D3D12/Vulkan 256MB device local memory heap and that it's memory type is host visible and coherent ...

I guess they removed the 256MB allocation limit for that particular memory heap based off of their wording ...
Supposedly they were bound by a PCIe BAR limitation. Perhaps now, with the Zen 3 platform, they run Infinity Fabric over the PCIe x16 link when an AMD GPU (with XGMI?) is detected, and map the GPU local memory into the system physical address space through IF mechanisms instead. :???:
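Whatever the transport underneath, the visible effect should be checkable with the FindBarMemoryType sketch from earlier in the thread: if the BAR restriction is lifted, the host-visible device-local heap should report (close to) the card's full VRAM instead of the classic 256 MiB window. Again just an assumption-laden sketch, with gpu being an already-selected VkPhysicalDevice and ReportBarSize a name of my own invention:

#include <cstdio>

void ReportBarSize(VkPhysicalDevice gpu) {
    VkDeviceSize heapSize = 0;
    if (FindBarMemoryType(gpu, &heapSize) < 0) {
        printf("No host-visible VRAM heap exposed\n");
        return;
    }
    // Anything larger than the legacy 256 MiB window suggests the BAR
    // has been resized to cover more (or all) of the card's VRAM.
    if (heapSize > 256ull * 1024 * 1024)
        printf("Large BAR: %llu MiB of VRAM is host-visible\n",
               static_cast<unsigned long long>(heapSize >> 20));
    else
        printf("Classic 256 MiB BAR window\n");
}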
 
In what way does HBCC (which wasn't mentioned here at all, and isn't supported on Navi 1) equate to GPU-accelerated decompression of the data stream from the SSD? These are two entirely different technologies.

And yes, the RX 6000 supports DirectStorage; this was never in any doubt. However, there has been no information about GPU-based decompression that works alongside DirectStorage.
The "GPU accelerated decompression" is just shaders running on ALUs, not dedicated hardware or some such, and we've had GPU accelerated decompressors forever.
The point was that it's not "nvidia exclusive" or even anything NVIDIA did first, as your post seemed to suggest.
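To make the "just shaders running on ALUs" point concrete, here's a toy run-length decoder in plain C++. Nothing in the loop needs fixed-function hardware; a compute shader would do the same integer work, typically with one thread per independently compressed chunk. Purely illustrative, not any shipping format:

#include <cstdint>
#include <vector>

// Toy RLE stream: pairs of (count, value) bytes. Decoding is pure
// integer/ALU work, which is why generic shader cores can do it and
// why GPU decompressors have existed for years.
std::vector<uint8_t> RleDecode(const std::vector<uint8_t>& src) {
    std::vector<uint8_t> out;
    for (size_t i = 0; i + 1 < src.size(); i += 2)
        out.insert(out.end(), src[i], src[i + 1]);  // count copies of value
    return out;
}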
 