AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

AMD's clocks are more "real" compared to Nvidia's. The 5700 XT's rated boost is 1905 MHz and its median real clock while gaming is 1890 MHz. The same goes for the 5700 XT's 9.7 TFLOPS figure.

Interesting times. Welcome back, AMD, to competition at the high end, and hopefully Intel jumps in too next year; better for all of us.

Regarding RT, Igor'sLAB, which is no AMD fanboy outlet, showed some results a few days back, and the RX 6800 was behind Ampere yet ahead of the 2080 Ti. So will it be enough for you and your games/settings? Maybe it's slower with complex RT and faster with simpler effects. The same question, from the other side, goes for memory: will 8GB or 10GB be enough? And it's good to hear AMD is working on an open-source, DLSS-style super-sampling alternative.

Perf/watt is better than Ampere, but not by much: less than 10% better for the 6800 XT vs the 3080, 15-20% for the 6900 XT vs the 3090, and probably a match between the 3070 and the vanilla 6800, since the 6800 should be 10-15% faster. Curiously, the top-end 6900 XT will be the best of the three in that metric, just the opposite of Nvidia, where the RTX 3070 is the most efficient. Surely the midrange models from both architectures will be even better when they launch in a few months. Hopefully the RX 6700 XT lands around 400 USD, because that's still my target, with close-to-3070 performance, 12GB and less than 200W... Maybe the 6800 is a cut-down part mostly meant to fill that gap for a few months, and that's why its price doesn't look so competitive.
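As a rough sanity check of those ratios, here is a minimal sketch using the announced board powers (300 W for the 6800 XT and 6900 XT, 250 W for the 6800, versus 320/350/220 W for the 3080/3090/3070); the relative performance figures are just the assumptions from this post, not measured results:

```python
# Back-of-the-envelope perf/watt check. Board powers are the announced
# TBP/TGP figures; the performance ratios (AMD card vs. its Nvidia
# counterpart) are assumptions taken from the post above, not benchmarks.
matchups = [
    # (amd_name, nv_name, assumed_perf_ratio_amd_vs_nv, amd_watts, nv_watts)
    ("RX 6800 XT", "RTX 3080", 1.00, 300, 320),
    ("RX 6900 XT", "RTX 3090", 1.00, 300, 350),
    ("RX 6800",    "RTX 3070", 1.12, 250, 220),
]

for amd, nv, perf_ratio, amd_w, nv_w in matchups:
    # perf/watt advantage = (perf_amd / W_amd) / (perf_nv / W_nv)
    advantage = perf_ratio * nv_w / amd_w
    print(f"{amd} vs {nv}: {advantage:.2f}x perf/watt for AMD")
```

With those assumptions it works out to roughly 1.07x, 1.17x and 0.99x, in line with the estimates above.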

It's difficult to compare efficiency when, in two of the three cases, the AMD cards have more memory.
 
I did not hear them say, but did they use AMD's latest 5000-series CPUs in the card benchmarks? Since there are supposed to be performance gains when the two are paired, do I also have to buy a 5000-series CPU to reach the same benchmark scores?
 
Huh? AMD has had all the same capabilities (and more) with HBCC since long before Nvidia. And yes, they specifically mentioned that the RX 6000 series supports DirectStorage, too.

In what way does HBCC (which wasn't mentioned here at all, and isn't supported on Navi 1) equate to GPU-accelerated decompression of the data stream from the SSD? These are two entirely different technologies.

And yes, the RX 6000 series supports DirectStorage; that was never in any doubt. However, there has been no information about GPU-based decompression working alongside DirectStorage.
 
It's difficult to compare efficiency when, in two of the three cases, the AMD cards have more memory.
That's right, but memory power draw is known too, so we could calculate it, and it's clear both architectures are not far from each other in perf/watt; maybe AMD is a bit better if you like, but considering 7nm should carry something like a 15% advantage just from the node, it's very close... Looks like this is the trend this generation, with the two vendors offering different amounts of video memory. It will just be weird to see an RX 6700 XT with a 192-bit bus and 12GB, more than the 3080's 10GB.
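A sketch of that memory-power correction, assuming typical per-chip power figures (the watts-per-chip values are my assumption, not published numbers); the chip counts follow from the known 16GB GDDR6 and 10GB GDDR6X configurations:

```python
# Sketch: strip an *assumed* per-chip memory power out of total board
# power to compare GPU-only efficiency. Chip counts are from the known
# memory configurations; the watts-per-chip figures are assumptions.
W_PER_GDDR6_CHIP  = 2.0   # assumed
W_PER_GDDR6X_CHIP = 3.0   # assumed

cards = {
    # name: (board_power_W, memory_chip_count, W_per_chip)
    "RX 6800 XT": (300, 8,  W_PER_GDDR6_CHIP),   # 16GB GDDR6 (8x 2GB)
    "RTX 3080":   (320, 10, W_PER_GDDR6X_CHIP),  # 10GB GDDR6X (10x 1GB)
}

for name, (board_w, chips, w_per_chip) in cards.items():
    gpu_only = board_w - chips * w_per_chip
    print(f"{name}: ~{gpu_only:.0f} W excluding memory")
```

Even with the memory stripped out under those assumptions, the two boards end up within a few percent of each other.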
 
Am I a troll for desiring a high-end GPU from AMD which can compete with Nvidia in all metrics? Am I a troll for not wanting to replace my 2080 Ti with a card that is faster on some fronts and slower on others?

I sold my 2080 Ti back in August when it became clear that the successor was finally right around the corner. I had hoped for a 20GB 3080, as 10GB of VRAM is insufficient for my needs, but the lack of any announcement of such a product, and the recent rumor that those cards will never be released, was enough to make me go out and buy another 2080 Ti to hold me over. So I'm back where I started, plus a few hundred dollars in profit from the sale of my original card.

Look, I know it's tempting to think that every person who bothers to opine on these matters is some kind of fanboy troll, but that's not always the case, especially on this forum. This isn't Reddit, it's not a YouTube comments section, and it's not the wccftech comments section. Smart, reasonable people make up the majority of posters here.

We don't yet know the RT performance of the RX 6000 series, and as a potential customer for this card, that is enough to make me skeptical. Enough so that I will wait for third-party reviews, at least.

Your question contained a false premise. I challenged it by offering you a chance to think through that premise further. Your dismissal is unwarranted, should you wish to engage in honest discussion of the subject matter.
I know you, though... that is why I dismissed your post. You indirectly affirmed the original post, which is, in itself, the false premise I was questioning.
Interesting how you are skeptical NOW about RT performance, but weren't when you purchased your RTX 2080 Ti. Or before you sold your 2080 Ti.
 
The only thing I want to know is what ray tracing performance is like for the various types of ray-traced effects: reflections, GI, shadows, etc.

I think the absence of RT benchmarks from AMD comes down to the fact that most current implementations aren't optimized to use the inline ray tracing feature, which could explain some of the potential performance deficit...
 
All the concern "trolling" about hypothetical RT performance is giving me diarrhea...
Anyway, the only official figure is the following:

I wouldn't call it trolling if people like myself, who are considering getting one of these vs Ampere, want to know whether its RT performance is going to be on par with the currently shown, very impressive rasterization performance, given that RT will likely make its way into most games this generation thanks to the consoles. I for one would feel quite let down if I purchased a 6800 XT now on the back of these benchmarks, only to find later that in games with RT enabled it performs more like a 3070 than a 3080. I think this is a very important question to have answered.

Are you sure about that Rage mode point? They showed numbers before introducing rage mode.

The 6900XT comparison vs the 3090 used Rage mode. The others didn't.

The performance difference between the 3080 and 3090 is minuscule. The 3090 is mainly for creatives who use Blender-type apps or do machine learning requiring massive amounts of memory. There is no good use case for the 3090 on the gaming side when you consider the tiny performance uplift and the huge uplift in price.

It's about 15% at 4K according to TPU (and in some cases it can certainly be more). So it does seem that the 11% uplift in resources from the 6800 XT to the 6900 XT will struggle to bridge that gap if the 6800 XT = 3080.

https://www.techpowerup.com/review/msi-geforce-rtx-3090-gaming-x-trio/32.html
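For reference, the 11% figure follows directly from the announced CU counts (80 on the 6900 XT vs 72 on the 6800 XT), which, at the same clocks, is less than the ~15% average gap TPU measured at 4K:

```python
# 6900 XT vs 6800 XT compute-unit uplift (announced specs), compared
# against TPU's ~15% average 3090-over-3080 lead at 4K.
cu_6900xt, cu_6800xt = 80, 72
uplift = cu_6900xt / cu_6800xt - 1
print(f"CU uplift: {uplift:.1%}")   # ~11.1%, short of the ~15% gap
```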

So AMD is using the maximum boost clock, rather than the game clock, as the value to calculate the memory bandwidth figure from... Hmm.

Makes sense, they do the same on their official specs page:

https://www.amd.com/en/products/specifications/compare/graphics/10516,10521,10526
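A rough sketch of why that choice matters: the GDDR6 side is fixed at 16 Gbps on a 256-bit bus, but any on-die-cache contribution scales with GPU clock, so quoting the boost clock (2250 MHz on the 6800 XT) instead of the game clock (2015 MHz) inflates the blended figure. The bytes-per-clock and hit-rate values below are placeholders for illustration, not AMD's numbers:

```python
# Effective bandwidth sketch: GDDR6 bandwidth is clock-independent,
# but the Infinity Cache contribution scales with GPU clock.
# BYTES_PER_CLK and HIT_RATE are illustrative assumptions only.
GDDR6_BW_GBPS = 16 * 256 / 8        # 512 GB/s (16 Gbps on a 256-bit bus)
BYTES_PER_CLK = 1024                # assumed cache bytes moved per clock
HIT_RATE      = 0.58                # assumed cache hit rate

def effective_bw(clock_mhz):
    cache_bw = BYTES_PER_CLK * clock_mhz * 1e6 / 1e9   # GB/s
    return HIT_RATE * cache_bw + (1 - HIT_RATE) * GDDR6_BW_GBPS

print(f"At game clock  (2015 MHz): {effective_bw(2015):.0f} GB/s")
print(f"At boost clock (2250 MHz): {effective_bw(2250):.0f} GB/s")
```

Whatever the real cache parameters are, the blended number moves by roughly the same percentage as the clock you plug in, which is presumably why the boost clock makes the spec sheet look better.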

I did not hear them say, but did they use AMD's latest 5000-series CPUs in the card benchmarks? Since there are supposed to be performance gains when the two are paired, do I also have to buy a 5000-series CPU to reach the same benchmark scores?

According to Tom's, "Smart Memory Access" requires specific developer support. So it's curious why AMD is including it here for older games, which presumably don't have it.

https://www.tomshardware.com/news/a...-with-ryzen-5000-cpus-via-smart-memory-access
 