AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

So in actuality, RDNA2 is scaling equally with Ampere here (and better with minimums) except at the high setting, where it falls behind. And even at the high setting, it's not THAT much worse.
Yeah, for a relatively weak RT effect like simple shadows in a simple game with graphics from 15 years ago, and even with those odds, AMD's RT hardware is still worse, the same situation as in Dirt 5. See Digital Foundry's analysis for a more complete picture across various games.



nVidia will definitely push to crank RT higher to put their cards in the best light, even when not necessary. I'm getting GameWorks/Tessellation vibes.
Nope, just very weak RT hardware compared to the competition.
 
WoW has had several engine and art/content upgrades over the years, and has quite dense polygon counts and higher-resolution textures compared to what it had 15 years ago. It also has much better particle systems and lighting, DX12 upgrades, and has been a multi-threaded engine for a few years now.

Its art style belies a decent underlying technology that can render a lot of detailed characters and effects at once.

It's not Star Citizen, but it's hardly a two-decade-old game either.
 
WoW has had several engine and art/content upgrades over the years, and has quite dense polygon counts and higher-resolution textures compared to what it had 15 years ago. It also has much better particle systems and lighting, DX12 upgrades, and has been a multi-threaded engine for a few years now.

Its art style belies a decent underlying technology that can render a lot of detailed characters and effects at once.

It's not Star Citizen, but it's hardly a two-decade-old game either.
Sure, but do we know what the main limiting point of this renderer is?

As an example, if a game is 100% depth-fill limited all the time, then any amount of compute (RT) added, up to a certain point, will be "free". Does that tell us anything noteworthy about the RT h/w in such cases? We already know that modern GPUs can run compute asynchronously.
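A toy model of that point (all numbers made up purely for illustration): if the RT compute can overlap a pass that is bottlenecked elsewhere, it only starts costing frame time once it exceeds the slack in that pass.

```python
# Toy model: cost of adding RT compute to a frame that is depth-fill limited.
# All numbers are made up for illustration; they are not measurements.

def frame_time_ms(depth_fill_ms: float, rt_compute_ms: float, async_compute: bool) -> float:
    """Return total frame time for the two workloads.

    Serial submission: the RT pass simply adds to the frame.
    Async submission: the RT pass overlaps the fill-limited pass and is
    'free' until it exceeds the slack in that pass.
    """
    if async_compute:
        return max(depth_fill_ms, rt_compute_ms)
    return depth_fill_ms + rt_compute_ms

depth_fill_ms = 10.0  # assumed fill-limited pass
for rt_ms in (2.0, 6.0, 10.0, 14.0):
    serial = frame_time_ms(depth_fill_ms, rt_ms, async_compute=False)
    overlapped = frame_time_ms(depth_fill_ms, rt_ms, async_compute=True)
    print(f"RT {rt_ms:4.1f} ms -> serial {serial:4.1f} ms, async {overlapped:4.1f} ms")
```

In this toy model the added RT work stays invisible until it exceeds ~10 ms, which is why "RT looks cheap here" doesn't automatically tell us much about the RT hardware itself.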
 
So what you're saying is that 8GB and 10GB of VRAM will in fact be enough to run games from new console gen just fine? I'm confused.
Yes. You clearly are.
Unfortunately, I can't write what I already did in simpler terms, so we'll just have to settle for what's out there.


Let's simplify it then: you're expecting 3060 to perform better than 3060Ti in "higher resolutions" "in the long run"?
What are "higher resolutions" and how do these two cards perform there right now?
What is "the long run" and when will this happen?
I'm charging $250 for each of my crystal ball sessions. I accept PayPal or Bitcoin, and I'll tell you only after my payment has been confirmed.

¯\_(ツ)_/¯
 
WoW did have graphical improvements since 2004, yes; still, the game doesn't really live up to today's (or even last generation's) standards. It's hardly a good benchmark for today's hardware, I think.

I'm charging $250 for each of my crystal ball sessions. I accept PayPal or Bitcoin, and I'll tell you only after my payment has been confirmed.

¯\_(ツ)_/¯

And that's totally not off-topic.
 
Unfortunately, I can't write what I already did in simpler terms, so we'll just have to settle for what's out there.
The problem is that you didn't write anything which can be quantified later.
"8GB is not enough but it is enough and 10 isn't enough but it's just 20% less than 12 and thus it's enough. But in the long run it won't be enough."
Here's something to think about: 12 and 16 GBs of VRAM won't be enough "in the long run" either.

I'm charging $250 for each of my crystal ball sessions. I accept PayPal or Bitcoin, and I'll tell you only after my payment has been confirmed.
In other words - you don't know and are just pulling stuff out of various places, just as I've said.
 
The problem is that you didn't write anything which can be quantified later.
I did. It just left you confused like you wrote in your previous post.
I'm sorry that you can't get it.

In other words - you don't know and are just pulling stuff out of various places, just as I've said.
Every theory about non-confirmed future happenings is "pulling stuff out of various places", including your theory that your beloved almighty RTX cards will always be better than any wretched Radeon card.
My theory is just better founded.
 
I did. It just left you confused like you wrote in your previous post.
I'm sorry that you can't get it.
No, you didn't. I've asked you as plainly as possible what your actual expectations are - and you've started talking about crystal balls.

Every theory about non-confirmed future happenings is "pulling stuff out of various places", including your theory that your beloved almighty RTX cards will always be better than any wretched Radeon card.
My theory is just better founded.
Your theory, which utterly and completely ignores all the facts of this console transition, is better founded than what exactly?
Let me repeat: I'm absolutely sure that an 8GB VRAM GPU like the 3060Ti or 3070 will be able to handle 99% of new-gen console games at console-level settings with console performance for the majority of this new console generation - so until about 2025 or so.
The remaining 1% is reserved for titles which just dump console code onto PC with about zero effort given to optimizing it for PC h/w and architecture. And that's what's called "bad porting", not "games needing more than 8GB of VRAM".
And here's the catch: I'm fully expecting such titles to perform relatively badly even on a 24GB 3090, simply because "bad ports" do that regardless of how much VRAM a GPU has - they just aren't optimized to run properly on non-console h/w.
Let's see how this plays out.
 
What's baffling about those who defend 16GB of VRAM and its uncertain future impact so heavily - all for the sake of future-proofing - is their complete disregard of the stark performance difference in the current DX12 Ultimate implementation (DXR) between vendors, a difference that is happening right NOW, with a palpable effect on the current roster of games, as opposed to some imaginary, unproven scenarios. Grasping at straws at its finest. And this is without even mentioning AI upscaling.

Even worse is the absence of such future-proofing claims when RDNA1 came with zero compatibility with DX12 Ultimate, while Turing was ready from the get-go! So much for future-proofing! They would rather turn off ray tracing, mesh shading and AI upscaling and play with vastly reduced image quality and trash performance, but rest easy knowing they have 16GB of VRAM, which enables them to do what exactly? I mean really .. do what exactly? Crank textures to some imaginary beyond-the-grave Ultra setting!
 
I'm actually hoping that AMD will release an 8GB 6800 model down the road - which seems fairly possible considering they are likely to have a $200 gap between the 6700XT and the 6800. That would let us compare the VRAM impact not only on the NV side but on the AMD side as well, and would separate the cases where RDNA2 GPUs gain performance because the code isn't properly optimized for anything but them from the cases where the VRAM amount actually comes into play.
 
Its art style belies a decent underlying technology that can render a lot of detailed characters and effects at once.

It's not Star Citizen, but it's hardly a two-decade-old game either.
It's still vastly outmatched by any DXR game; sugar-coating an old engine can only get you so far.
 
So I dug up some of @Bondrewd's posts from before he was banned, and it seems the guy/gal was the real deal. It's a shame his short-message posting style got him banned, but I think there are really good insights in his posts (even if a bit cryptic at times) into what to expect within the following months:


https://forum.beyond3d.com/posts/2151554/
https://forum.beyond3d.com/posts/2151556/
https://forum.beyond3d.com/posts/2151526/ <- the most relevant one, in reply to @trinibwoy
https://forum.beyond3d.com/posts/2151485/
https://forum.beyond3d.com/posts/2152320/
https://forum.beyond3d.com/posts/2152343/


To summarize (and decrypt/interpret):

1 - AMD knew they'd get a power efficiency win with Navi 21 vs. GA102 (funny how no one talks about power efficiency on top-end GPUs anymore).

2 - However, they're not very interested in making/selling more Navi 21 chips than necessary to show off a halo product, because

3 - They get higher margins and more consolidated clients by selling CPUs and APUs, therefore these are the ones getting most of their 7nm wafer allocation.

4 - Also, RDNA2 is mostly an architecture optimized for mobile, so AMD will be pushing for Cezanne + Navi 22/23 combos in laptops, with a significant ramp-up in laptop design wins (which was recently confirmed by Lisa Su at CES, BTW).

5 - OTOH, Huawei / HiSilicon leaving the SoC business in Q4'20/Q1'21 means there should be some 7nm production capacity freed up for AMD.


So it looks like Navi 21 cards may not ever have a decent ramp-up in production. AMD's target for 2021 is CPUs and APUs for the desktop, and APU or APU+N22/N23 on mobile. Desktop Navi 21 and Navi 22 may be in short supply the whole year.

Cezanne + Navi 22/23 being their main target for laptops would also explain why AMD didn't bother all that much with upgrading its now-ancient Vega 8, other than perhaps increasing its clocks even further.


Nvidia we love you! Nvidiaaaaaaaaaaaa my RTX savior, what would we do without you! I'm just so happy right now *sniff* I need a tissue
Isn't it impressive how every page in the RX6800 thread needs to be flooded with this week's "Raytracing Performance Comparison" video/article? The ones that media outlets are totally not pressured to release for fear of being blacklisted out of FE samples (and god knows what else), like we saw happening with Hardware Unboxed?

And now the narrative is also "I think 8GB is plenty* for at least 5 years", because of course no AMD video card could ever have any perceptible long-term advantage on its side. And of course they need to pollute the RX6800 thread by making that point over and over again.

* - most probably the same users who were claiming the Fury X was doomed at launch because it released with only 4GB VRAM against a 6GB 980 Ti.
 
- AMD knew they'd get a power efficiency win with Navi 21 vs. GA102 (funny how no one talks about power efficiency on top-end GPUs anymore).
But they didn't.
And neither did they beat NV in RT performance, which is what they claimed would happen.

Isn't it impressive how every page in the RX6800 thread needs to be flooded with this week's "Raytracing Performance Comparison" video/article?
It is funny because it's actually you guys who are constantly mentioning RT and RTX while praising the useless VRAM amounts of the 6800 cards, in the context of "so what if it's useless, so is the RT advantage!" And as someone above has already said, that's factually incorrect right now, even on RDNA2 itself. But alas, gotta push the narrative, obviously, somehow.
 
And as someone above has already said, that's factually incorrect right now, even on RDNA2 itself. But alas, gotta push the narrative, obviously, somehow.

Actually, when not comparing to Ampere, RDNA2 PC GPUs are not doing badly at all. For pure rasterization at lower resolutions they are perfectly fine, I think; they compete in rasterization, though that might swing in Ampere's favour down the road in more compute-heavy engines like UE5.

I see the PS5 GPU being mentioned by him a lot. It's not AMD PC GPUs he's building 'cases' for. While he's on it anyway: yes, that actual GPU sits at around 10TF worth of RDNA2, with no Infinity Cache to help the bandwidth, RDNA2's ray tracing performance coupled with lower overall GPU power, and no reconstruction hardware to match anything near DLSS. With a 3060/3060Ti, the current lowest NV entry, matching/outmatching the PS5's GPU even before RT, I can see what the real problem is.
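For reference, that ~10TF figure is just the standard peak FP32 formula applied to the public specs (clocks below are boost/max clocks, so these are peak numbers, not sustained performance); a quick sketch:

```python
# Peak FP32 throughput from public RDNA2 specs (boost/max clocks, so best-case figures).
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    # Each RDNA2 CU has 64 FP32 lanes; an FMA counts as 2 FLOPs per lane per clock.
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5 (36 CUs @ 2.23 GHz):        ~{fp32_tflops(36, 2.23):.1f} TFLOPS")
print(f"RX 6800 XT (72 CUs @ 2.25 GHz): ~{fp32_tflops(72, 2.25):.1f} TFLOPS")
```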

Outside of that, looking at the 6800/XT - with double the PS5's rendering capabilities, better specced for ray tracing because of that, and the inclusion of Infinity Cache (mitigating the higher-resolution/reconstruction part to an extent; see the rough model below) - I'd say the RDNA2 PC GPUs from AMD, when not compared directly to NV GPUs, are doing fine and could totally be an option for anyone interested in upgrading or building new, even for RT, which these higher-end GPUs will do.
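On the Infinity Cache point, a very simplistic way to picture the mitigation: every hit in the 128MB cache is a memory request that never touches GDDR6, so the raw bandwidth gets amplified by roughly 1/(1 - hit rate). The ~58% hit rate at 4K is AMD's own marketing figure, not a measurement of mine, and the model ignores cache bandwidth limits and write traffic.

```python
# Simplistic effective-bandwidth model for Infinity Cache (ignores cache-bandwidth
# limits, write traffic, and everything else a real memory system has to deal with).
def effective_bandwidth_gbs(dram_bw_gbs: float, hit_rate: float) -> float:
    # Only misses consume DRAM bandwidth, so serviceable traffic scales by 1/(1 - hit_rate).
    return dram_bw_gbs / (1.0 - hit_rate)

raw = 256 / 8 * 16  # 256-bit bus x 16 Gbps GDDR6 = 512 GB/s on the 6800/6800 XT/6900 XT
for hit_rate in (0.58, 0.40):  # ~58% is AMD's quoted 4K figure; 40% as a pessimistic case
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth_gbs(raw, hit_rate):.0f} GB/s effective")
```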

I've been watching Godfall with ray tracing, and maxed out it looks really nice. Its RT isn't the best, but it's there and it does what one expects for what it is. A sizeable next-gen feature upgrade over the PS5 version, which lacks RT and doesn't run everything maxed to Epic (and beyond).
 
Was that not only the case in DX11?
I think it depends on the specific game/engine - there's no such bottleneck in SoTTR, for example, but it seems to be present in DXMD, RDR2 and some other games that inexplicably run worse on Radeon GPUs in CPU-limited scenarios.

In any case, I believe DX11 is a thing of the past now, judging from recent releases and some progress in terms of proper utilisation of DX12/Vulkan (instead of doing things like some people did by adding OpenMP to their projects and claiming they "support" "multicore" via it, to take an example from another sphere of IT).
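As a crude illustration of why driver/API overhead only shows up when the CPU is the limiter (all numbers made up, purely to illustrate the shape of the problem):

```python
# Toy model of a CPU-limited scenario: frame time is whichever side finishes last.
def frame_time_ms(draw_calls: int, cpu_us_per_draw: float, gpu_ms: float) -> float:
    cpu_ms = draw_calls * cpu_us_per_draw / 1000.0  # driver/submission cost on the CPU
    return max(cpu_ms, gpu_ms)

gpu_ms = 8.0  # assume the GPU itself renders the frame in 8 ms
for driver, overhead_us in (("low-overhead driver", 2.0), ("high-overhead driver", 5.0)):
    for draws in (2000, 8000):
        ft = frame_time_ms(draws, overhead_us, gpu_ms)
        print(f"{driver:21s} {draws:5d} draws -> {ft:5.1f} ms ({1000/ft:4.0f} fps)")
```

With few draw calls both "drivers" land on the 8 ms GPU time; only once the CPU side becomes the long pole does the per-draw overhead separate them, which matches the "runs worse only in CPU-limited scenarios" observation.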
 
I don't get why people think that 8GB on a 3070 is a good idea from a practical or even just a marketing point of view. It's already not enough to run some games (like Doom Eternal and other idTech shooters) at 4K at the highest possible settings even without RTRT, which would probably occur much more often in the future if Jensen releases the 20GB cut-down GA102 card (it seems to be in a Schrödinger state - each time I read a news piece about it, it's either dead or alive, with no discernible pattern as to which state it will eventually turn out to be in).

Of course, you can lower the resolution, decrease texture quality and do other things to mitigate this issue, but N21 gets better at lower resolutions, and if you don't go for the 4K eye candy (I certainly don't, as I own a 1080p 280Hz monitor), you won't use RTRT and other features that tank FPS into the ground.
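For a sense of scale, here's a back-of-the-envelope 4K budget - every number below is an assumption for illustration, not data from any particular game: the render targets themselves only account for a few hundred MB, and it's the texture/streaming pools at max settings that eat most of an 8GB card.

```python
# Back-of-the-envelope 4K VRAM budget. All figures are assumptions for illustration.
WIDTH, HEIGHT = 3840, 2160

def target_mb(bytes_per_pixel: int) -> float:
    return WIDTH * HEIGHT * bytes_per_pixel / (1024 ** 2)

gbuffer_mb   = 5 * target_mb(8)    # e.g. five RGBA16F-ish G-buffer/intermediate targets
depth_mb     = target_mb(4)        # 32-bit depth buffer
swapchain_mb = 3 * target_mb(4)    # triple-buffered 8-bit backbuffers

render_targets = gbuffer_mb + depth_mb + swapchain_mb
texture_pool_gb = 5.5              # assumed streaming/texture pool at "ultra" textures
other_gb = 1.0                     # meshes, acceleration structures, driver overhead...

total_gb = render_targets / 1024 + texture_pool_gb + other_gb
print(f"render targets: ~{render_targets:.0f} MB")
print(f"estimated total: ~{total_gb:.1f} GB (vs an 8 GB card)")
```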
 
I don't get why people think that 8GB on a 3070 is a good idea from a practical or even just a marketing point of view. It's already not enough to run some games (like Doom Eternal and other idTech shooters) at 4K at the highest possible settings even without RTRT
Because "people" look at actual data instead of spewing such nonsense?

[attached chart: average FPS at 3840×2160]
 