AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

If you're okay with Navi 31 providing ~2080Ti RT performance in 2023 then no.
You skipped the "making it beefier" part, and apparently assume Navi31 is just rebranded or shrunk Navi21?
It's not holding back capabilities, it just doesn't provide low level h/w access. That's what APIs do generally. The extent to which this impacts performance is highly unlikely to be above single percent digits - or said capabilities would have been included in DXR 1.1.
The hardware is capable of things the API doesn't provide, thus, the API is holding the hardware back. Whether that's performance wise, feature wise or whateverwise is irrelevant.
Not to mention MS said it outright that they allow Xbox devs direct access to hardware, which allows things they couldn't do with DXR.
 
We already know that at least DXR is holding back its current capabilities; not sure about Vulkan RT.
Did not look in detail, but there seems to be no major difference between the APIs.
To unleash 'capabilities', we would not need an API at all - that's the point.
But I'm uncertain about something else: we don't know about the execution model of AMD's traversal code, and whether it is possible to match its performance with regular compute shaders. The execution model of Mesh Shaders is way beyond that, for example.
So, eventually, to truly unleash things, we also need a reform of compute shaders. So far VK/DX12 do not even match up with what Mantle had. That's really my main concern about current APIs, much more important than RT details.
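To make the "traversal with regular compute shaders" question concrete, here is a minimal CPU-side Python sketch of the per-ray loop such a shader would have to run. The node layout and names are my own illustration, not AMD's actual format (real implementations use wider, compressed nodes), but it shows where the stack and the divergence come from:

```python
# Minimal sketch of a software (compute-style) BVH traversal loop.
# Assumptions: a simple two-child node layout and no zero components in the
# ray direction, purely for illustration.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Node:
    lo: Vec3                                    # AABB minimum corner
    hi: Vec3                                    # AABB maximum corner
    children: Optional[Tuple[int, int]] = None  # inner node: indices of two children
    tri: Optional[int] = None                   # leaf node: triangle index

def hits_aabb(orig: Vec3, inv_dir: Vec3, lo: Vec3, hi: Vec3) -> bool:
    # Slab test: the ray hits the box if the per-axis entry/exit intervals overlap.
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(orig, inv_dir, lo, hi):
        t0, t1 = (l - o) * inv, (h - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(nodes: List[Node], orig: Vec3, direction: Vec3) -> List[int]:
    # Each ray keeps its own short stack. Rays in the same wave taking different
    # branches is exactly the divergence that makes this loop expensive in plain
    # compute and attractive to hide behind driver/hardware traversal.
    inv_dir = tuple(1.0 / d for d in direction)
    candidates, stack = [], [0]                 # start at the root node (index 0)
    while stack:
        node = nodes[stack.pop()]
        if not hits_aabb(orig, inv_dir, node.lo, node.hi):
            continue
        if node.tri is not None:
            candidates.append(node.tri)         # a real shader would do a ray/triangle test here
        else:
            stack.extend(node.children)
    return candidates
```

Whether the driver's traversal code could be matched by a loop like this written against the public compute APIs is exactly the open question - the APIs don't tell us.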
 
You skipped the "making it beefier" part, and apparently assume Navi31 is just rebranded or shrunk Navi21?
No, I've just extrapolated the RT performance of a Navi 31 by going with a Navi 21x2.
"Making it beefier" would actually change this for the better. Hence why it is needed unless you're willing to see the performance which I've described.

The hardware is capable of things the API doesn't provide
All h/w is capable of such things.

thus, the API is holding the hardware back
Sure but the amount of performance loss due to this is unlikely to be anything worth talking about.

Whether that's performance wise, feature wise or whateverwise is irrelevant.
It is relevant in the talk about performance.
Feature wise I'd expect future GPUs to have additional capabilities which would prompt MS to update DXR to 1.2 or 2.0. It would also prompt a redesign of the RT h/w in all future GPUs. This however is irrelevant for RDNA2.

Not to mention MS said it outright that they allow Xbox devs direct access to hardware, which allows things they couldn't do with DXR.
Not to mention that so far there is zero indication of Xbox devs being able to magically extract a lot more performance from the h/w than what the same devs are doing on PC.
 
Not to mention that so far there is zero indication of Xbox devs being able to magically extract a lot more performance from the h/w than what the same devs are doing on PC.

Why is this even being brought up when we're still at the start of the new console generation? That's similar to going back to 2013 or early 2014 and being told that e.g. AC: Black Flag or CoD: Ghosts were fully representative of what the PS4 and Xbox One really were capable of.

The thing however that makes me less reluctant to believe in console optimization magic is that during the last gen, we saw an HD 7870 typically being enough to slightly edge out the PS4 version. Nvidia GPUs of the same era aged horribly though, which I admit could be because of the drivers, but that's no comfort for the users.
Doom Eternal is a known worst case for Nvidia performance on older GPUs, and it never improved. It ran horribly for me when I tried it on an i7 4770K and GTX Titan last week; it couldn't keep 30 FPS despite setting everything to minimum and the resolution scale to 50%. Whereas my laptop with the 2500U and 1050 Ti 2GB could get close to a stable 60 FPS despite officially having too little VRAM.

I wouldn't be surprised if Nvidia's advantage in DXR turns out like its advantage in tessellation did this generation. Many games simply started limiting the tessellation to what was performant on the consoles, so Nvidia's tessellation advantage went to waste in many late-gen titles.
 
Why is this even being brought up when we're still at the start of the new console generation? That's similar to going back to 2013 or early 2014 and being told that e.g. AC: Black Flag or CoD: Ghosts were fully representative of what the PS4 and Xbox One really were capable of.
It's not similar since we're talking about just one particular thing here - RT. If consoles with RDNA2 were able to run RT a lot better than PCs with 6000 series Radeons due to API limitations, then we'd already be seeing this in several multiplatform titles. But so far no such examples exist.

The thing however that makes me less reluctant to believe in console optimization magic is that during the last gen, we saw an HD 7870 typically being enough to slightly edge out the PS4 version. Nvidia GPUs of the same era aged horribly though, which I admit could be because of the drivers, but that's no comfort
It's because of the hardware, and to an extent because of the lack of s/w optimization for said h/w on the developers' part.

I wouldn't be surprised if Nvidia's advantage in DXR turns out like its advantage in tessellation did this generation. Many games simply started limiting the tessellation to what was performant on the consoles, so Nvidia's tessellation advantage went to waste in many late-gen titles.
Did it though? Or did GCN's handling of tessellation get to a point where Nvidia didn't really have an advantage there anymore?
 
It's not similar since we're talking about just one particular thing here - RT. If consoles with RDNA2 were able to run RT a lot better than PCs with 6000 series Radeons due to API limitations, then we'd already be seeing this in several multiplatform titles. But so far no such examples exist.

That's assuming studios actually are knee-deep in optimizing for the consoles and not mainly shipping quick PC ports with few low level optimizations in place. You know optimization costs money, and even then optimization takes time, and all consoles have low userbases early in a generation.
AFAIK, there's still no PC version that requires higher than DX12 feature level 11_1 (at least Dirt 5 refuses to run on the original Titan, which is limited to 11_0). I presume that means no games fully take advantage of the new feature sets on either AMD or Nvidia.

It even became known last gen that the Xbox One did ship with what was eventually the same DX11 as on PC. That wasn't public information at launch, and it didn't come to the public directly from Microsoft.

Did it though? Or did GCN's handling of tessellation get to a point where Nvidia didn't really have an advantage there anymore?



Why would that be so different from titles starting to prioritize performant RT on RDNA2? IIRC, titles back in 2013 and 2014 typically had higher tessellation settings on PC than on consoles, and Kepler cards had a visible advantage compared to GCN cards. Then, as the generation went on, at some point tessellation just stopped being a big deal in most comparisons, both for console vs PC and Nvidia vs AMD.
 
That's assuming studios actually are knee-deep in optimizing for the consoles and not mainly shipping quick PC ports with few low level optimizations in place. You know optimization costs money, and even then optimization takes time, and all consoles have low userbases early in a generation.
The amount of optimization which devs put into the console versions of their games is usually an order of magnitude higher than what they do for PC.
I don't see why this would be any different for new consoles, early generation or not.
It can clearly be seen that devs are already spending a lot of time optimizing their RT performance on the new consoles btw, judging from console-to-PC comparisons and the changes which some games get with patches.
RT, like any new thing, gets a lot of attention and interest from devs at the moment.

AFAIK, there's still no PC version that requires higher than DX12 feature level 11_1 (at least Dirt 5 refuses to run on the original Titan, which is limited to 11_0).
There are a couple of titles requiring FL 12_0 already I think, but why does it matter?

Why would that be so different from titles starting to prioritize performant RT on RDNA2? IIRC, titles back in 2013 and 2014 typically had higher tessellation settings on PC than on consoles, and Kepler cards had a visible advantage compared to GCN cards. Then, as the generation went on, at some point tessellation just stopped being a big deal in most comparisons, both for console vs PC and Nvidia vs AMD.
Again, why do you think that it's because "games simply started limiting the tessellation to what was performant on the consoles" and not because AMD h/w got better at tessellation, to a point where NV did not have a performance advantage due to it? This new AMD h/w was also used for the mid-gen console upgrades, which in turn led to the "pro" console versions of games running with higher amounts and levels of tessellation than the base console versions, leading to less visual difference between them and PC too.
 
The amount of optimization which devs put into the console versions of their games is usually an order of magnitude higher than what they do for PC.
I don't see why this would be any different for new consoles, early generation or not.
I could imagine there's a move from 'maxing out' to 'being able to scale down'. Many reasons here: Series S, Switch, the chip shortage, and the economy might put Moore's Law on hold before physics does. Maximize profit to compensate for insane production costs.
 
The amount of optimization which devs put into the console versions of their games is usually an order of magnitude higher than what they do for PC.
I don't see why this would be any different for new consoles, early generation or not.
It can clearly be seen that devs are already spending a lot of time optimizing their RT performance on the new consoles btw, judging from console-to-PC comparisons and the changes which some games get with patches.
RT, like any new thing, gets a lot of attention and interest from devs at the moment.

You say that the optimization on consoles is usually an order of magnitude higher than what devs do for PC, yet early-gen games have never been remembered as showing what the consoles were capable of.
Rather, early-gen titles have often been picked on for having visible compromises, for using engines that were made with the previous generation in mind.
Why are today's consoles' early-gen titles suddenly different?

And since you say studios are still getting to grips with using DXR in the right way even on PC, and are patching the console versions too, why do you assume that consoles are using well optimized low level implementations?


There are a couple of titles requiring FL 12_0 already I think, but why does it matter?

It's just that you seem to assume that the consoles are close to having peak optimization from day one, when no game yet is known to require DX12 Ultimate, and even Nvidia GPUs arguably are hampered by games still being made to run on FL 12_0.
Even the go to cases for showcasing PC versions at their best have been limited by having to take older hardware into account.

Again, why do you think that it's because "games simply started limiting the tessellation to what was performant on the consoles" and not because AMD h/w got better at tessellation, to a point where NV did not have a performance advantage due to it? This new AMD h/w was also used for the mid-gen console upgrades, which in turn led to the "pro" console versions of games running with higher amounts and levels of tessellation than the base console versions, leading to less visual difference between them and PC too.

I'm not talking of AMD releasing new hardware that was better at tessellation. I'm talking of early gen games that had lower tessellation settings on both consoles and AMD GPUs than on Nvidia GPUs. And how tessellation settings stopped being a deal in comparisons for later games, on the same consoles and AMD and Nvidia GPUs.

But if I misunderstood your argument and you mean it's improvements on the game engine and/or driver side, if it took two or three years to optimize tessellation for GCN to remove Nvidia's visible advantage, why do you assume optimizing RT on RDNA2 would be done in just a few months?
 
Why are today's consoles' early-gen titles suddenly different?
Because if it was a simple lack of API exposure of some feature which would make RT a lot faster on RDNA2, then this feature would already be used on consoles, where we're seeing RT being run at 1/8th of the resolution of what PC is able to do.
It's not rocket science. Later games will find new ways of using RT which may lead to bigger visual gains but not necessarily ways for it to run faster. This is more or less what is happening with all console generations.

It's just that you seem to assume that the consoles are close to having peak optimization from day one
No, I know that consoles are getting a lot more dev resources for optimizations. This doesn't mean what you're saying at all. But it does mean that if there's a low hanging fruit on a console which would bear a huge performance gain - it would be used straight away, in early titles or whatever.

I'm not talking of AMD releasing new hardware that was better at tessellation. I'm talking of early gen games that had lower tessellation settings on both consoles and AMD GPUs than on Nvidia GPUs. And how tessellation settings stopped being a deal in comparisons for later games, on the same consoles and AMD and Nvidia GPUs.
Take a "later game" which use tessellation extensively and run it on, say, 280X vs 970. The results will be the same as with "early gen games". The reason why "tessellation settings stopped being a deal in comparisons for later games" is because AMD has more or less fixed their tessellation performance in GCN3+ - or at least made it fast enough for the cards to be limited by other parts of the pipeline - not because later gen games are using less tessellation than the earlier ones.
 
Nah, the title specifies a certain AMD GPU range, not the 'PS5 GPU'. There isn't even a GPU mentioned in that performance range to start with (5700 XT), nor does the PS5 share the same feature set as the named GPUs.

There's still a console section, I think, where the PS5 GPU can be discussed.
 
TechPowerUP has the first picture of an RX 6700.


[attached image: RX 6700 card]


6GB VRAM.
On one hand it seems they're keeping the 192-bit bus, which is good, but on the other hand it's only 6GB of VRAM, which makes me question whether they should be marketing this as a 1440p GPU. The RTX 3060 should be within the same performance bracket as the RX 6700, but it has 12GB. This seems like a big compromise for performance in the long term. Perhaps we'll have both options (6 and 12GB) down the road.

I get that going down to 128-bit for 8GB could hurt performance too much, but I wonder if it wouldn't have been better to find a compromise in the middle, e.g. using a 160-bit bus for 10GB total VRAM.
As it stands though, I don't think I'd recommend buying this card for 1440p monitors.
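
For what it's worth, here's the rough GDDR6 math behind those capacity options. It's a sketch assuming one 32-bit chip per memory channel and the 8Gb/16Gb densities currently shipping, and it ignores clamshell or mixed-density configs:

```python
# Rough GDDR6 capacity math: one 32-bit chip per memory channel, with 8Gb (1GB)
# or 16Gb (2GB) chip densities. Clamshell / mixed-density configs are ignored.
def vram_options_gb(bus_width_bits: int, chip_densities_gbit=(8, 16)) -> list:
    chips = bus_width_bits // 32               # number of chips hanging off the bus
    return [chips * density // 8 for density in chip_densities_gbit]

for bus in (128, 160, 192, 256):
    print(f"{bus}-bit bus -> {vram_options_gb(bus)} GB options")
# 128-bit -> [4, 8], 160-bit -> [5, 10], 192-bit -> [6, 12], 256-bit -> [8, 16]
```

So a 192-bit bus really does force a 6GB or 12GB choice, which is why a 10GB middle ground would need an unusual 160-bit bus.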


Because it's an RDNA 2 thread too?
Of course, constantly pushing to derail the discussion into "Ampere has the best RT" and "DLSS is king" is fair play. However, bringing up actual RDNA2 GPUs in the RDNA2 thread, which will probably sell >10x more than Ampere and are therefore likely to become the focus for multi-platform developers in the medium to long term, should be considered cheating.
 
Consoles were mentioned in the discussion of RT API deficiencies on RDNA2 on PC, I see nothing wrong with that.

On one hand it seems they're keeping the 192-bit bus, which is good, but on the other hand it's only 6GB of VRAM, which makes me question whether they should be marketing this as a 1440p GPU.
6GB is enough for 1440p unless you're willing to use RT - which you likely won't on a <6700XT class card anyway.
 
I'm very curious about how they will fix RT with RDNA3. More units, or a redesign of the whole thing? A little OT, but I'm curious about the Intel solution too.

Anyway, we can't buy anything for now...

Well, if RDNA 3 is 5nm they will have almost double the peak density, so they will have a lot of extra transistors to work with.
 
Well, if RDNA 3 is 5nm they will have almost double the peak density, so they will have a lot of extra transistors to work with.

Sure.

My idea was more like: just quadruple the RT units or whatever, or go with another architecture, or more "competent" units, etc. Of course it's not black or white, it will probably be a little bit of everything.
 
My idea was more like: just quadruple the RT units or whatever, or go with another architecture, or more "competent" units, etc. Of course it's not black or white, it will probably be a little bit of everything.

Or AMD will successfully fight back on the "better RT performance is the best" narrative and promote a more cautious/effective approach to RT overall instead of fighting Nvidia's number-of-rays-per-second war.

For example, back in 2012-2014 geometry performance was super important and every review would make some claim about tessellation settings in games like Metro (and culminating with Witcher 3's subpixel triangles on Geralt's hair).
Eventually the optimization for the GCN consoles took over and Kepler's geometry performance advantage over GCN was deemed irrelevant over the years, especially with the rise of async compute in game engines.


It could be that AMD will try to catch up with nvidia's RT advantage with RDNA3 and later architectures, but it may be a better strategy to rely on the 9th-gen consoles to change the game engines' and devs' RT targets instead. Playing into their competitor's game is usually not the best decision, because that way AMD is bound to always be one step behind.
 
Tessellation is not the same as RT. Tessellation was deemed not so important by developers because of the way it complicated art and design: tessellated characters and objects made animation, collision detection and texturing much harder. This limited the use of tessellation to terrain, water, snow, mud and some fixed inanimate objects. In the end, the geometrical complexity from tessellation remained very limited.

RT is NOT the same: it handles reflections, shadows and lighting, among other effects, each with its own complex visuals and techniques. It gives developers the ability to bring unique visuals to their games. If you want to increase quality or use RT in multiple effects, you will always have to increase the number of rays, whether for a single effect or across multiple effects. If you want reflections to be more accurate and real time, you will want to increase resolution and LOD, which means you will require more rays; you will also need them if you want your game to have reflections plus shadows or GI.

Equating RT to tessellation is an amateur generalization and not a very educated point of view at all.
 