AMD FidelityFX on Consoles

Navi 14 has 13 SKUs associated with it, 7 of which are professional cards.

What would motivate AMD more? Including capabilities specifically for gaming use in low-end consumer cards that its mid- and high-range cards lack? Or adding the capability to low-end professional cards, where deep learning is more readily utilized, with the side effect that some low-end consumer cards have the hardware because it's cheaper to repurpose a die than to redesign it?
If you're referring to TPU's list, then there are a bunch of made-up SKUs over there. AFAIK there's no laptop Radeon Pro W, nor a Pro 5300. There is one desktop Radeon Pro W5500 with Navi 14, but then again there's also a W5700 with a Navi 10 that lacks the mixed-precision dot product.

I think it's a bit far-fetched to claim Navi 14 is a low-end GPU for ML inference after all.
Occam's Razor says Navi 14 is simply AMD's lowest-end discrete GPU that uses an architecture made for gaming, which is how AMD is marketing it.


There is no proof that PS5 has them either. The burden of proof shouldn't require any of us to prove a negative.
You can twist the "burden of proof" fallacy to use it both ways (which some here are doing). I can't ask anyone to "prove the feature isn't there", nor can you ask me to prove "the feature isn't absent".

Sony didn't make a detailed 50-slide presentation about what is and isn't present in the iGPU. Cerny gave the Road to PS5 presentation, where he stayed pretty high-level on just a handful of features he thought were most relevant (as well as its unequivocal strengths like the I/O complex), and that's all there is.
IMO it's wrong to assume with 100% certainty that an RDNA feature present in all RDNA2 GPUs and 2/3rds of RDNA1 GPUs is absent just because Sony didn't mention it.
Maybe it's absent, maybe it's not.


Should we assume or have assumed (without any proof) that PS5 had VRS, Mesh shaders and/or Infinity Cache because Sony made no declaration that the PS5 doesn't sport those features?
The PS5 definitely doesn't use DX12 Ultimate's VRS, nor DX12's Mesh Shaders, and Infinity Cache isn't there judging from Fritz's die X-rays.
From Matt Hargett's comments, it seems Sony didn't see much value in DX12U's VRS, but it remains to be seen whether they're just not exposing the functionality in their SDKs, or whether the hardware capability really isn't the same as in all the other RDNA2 GPUs. Maybe it's absent and they're doing it through software, or maybe they're implementing foveated rendering through hardware in a different way.
As for Mesh Shaders, it being an RDNA2 GPU, I see no reason why the PS5 would somehow have an inferior implementation of the shorter geometry pipeline that mesh shaders allow. Perhaps their primitive shaders are a fork of Vega's primitive shaders tuned to Sony's demands, or perhaps Sony is simply using the name Primitive Shaders for what DX12 calls Mesh Shaders.



The AMD RDNA white paper describes int4 and int8 dot products as being part of variants of the RDNA CU. That implies there are RDNA CUs with no mixed-precision dot-product functionality. There is enough hardware variation between RDNA-based processors that you can't readily assume all the hardware is present but simply turned off or broken.
I'm not readily assuming anything, just offering an opinion. I've been claiming no certainties whatsoever, other than that we can't be certain about a number of things some here are taking for granted.

Regarding what you claim about "enough hardware variation", all I see is the very first consumer-available RDNA GPU missing that feature and every RDNA1 and RDNA2 GPU that succeeded it confirmed to have it.
Where you see "it has enough hardware variation", I see "the feature is missing from the first GPU out of a family of 7 RDNA GPUs whose INT4/INT8 specs are publicly available".
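For anyone not following the jargon: the mixed-precision dot product being argued about here is a DP4A-style operation, i.e. four packed int8 pairs multiplied and accumulated into a 32-bit integer in a single instruction, which is what makes low-precision inference cheap on GPUs that have it. A rough C++ sketch of the arithmetic such an instruction performs (just an illustration of the math, not AMD's actual instruction or intrinsics):

```cpp
#include <cstdint>
#include <cstdio>

// Scalar illustration of a DP4A-style mixed-precision dot product:
// four signed 8-bit pairs are multiplied and accumulated into a 32-bit value.
// GPUs with the feature do this (and the int4 variant) as one instruction.
int32_t dot4_int8(const int8_t a[4], const int8_t b[4], int32_t acc)
{
    for (int i = 0; i < 4; ++i)
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    return acc;
}

int main()
{
    const int8_t a[4] = { 1, -2, 3, -4 };
    const int8_t b[4] = { 5,  6, 7,  8 };
    std::printf("%d\n", dot4_int8(a, b, 0)); // 1*5 - 2*6 + 3*7 - 4*8 = -18
    return 0;
}
```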




Or there's the idea that a performance delta doesn't have to be constantly reposted when it's already known and doesn't advance the discussion. The only reasons for those here to constantly do so are some childish fanaticism towards a company, or a financial benefit in doing so.
There's a very good reason why I didn't dare post that "moot point" quote from Scott Herkelman in the PC GPU subs. Half the thread about RDNA2 GPUs consists of the same 5 users repeating the "but RT performance" and "what about DLSS" rhetoric to derail it.
 
Personally, I don't see why there's even a big argument about whether or not platform X, Y, or Z features a HW implementation of VRS ...

The only case a software VRS implementation can't cover compared to a hardware implementation is a pure forward renderer, but almost none of the high-end renderers we see in modern AAA games with high graphical fidelity actually fall into that case. In the case of deferred renderers, VRS (HW or SW) usually isn't helpful either, since they often face a fillrate bottleneck, so we can mostly forget about implementing the technique there altogether. AMD also suggests that you won't see much performance gain with VRS when you have high geometric complexity in your scenes, which will probably become more common as we progress further into the new generation ...

You can implement SW VRS on top of a deferred renderer or a forward renderer as long as we do tile classification on either of them, like we see in the CoD: Modern Warfare reboot, which features a Forward+ (AKA tiled forward) renderer. Tile classification has other benefits too when we combine it with ray tracing in our rendering pipeline, where we can selectively trace as few rays as possible in a scene, ideally maximizing the improvement to image quality in the parts of the scene that need it while minimizing the cost of tracing those rays ...
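To make the tile-classification idea concrete, here's a rough CPU-side C++ sketch (my own illustration, with made-up thresholds and tile size, not how Modern Warfare actually does it): classify each screen tile by the luminance contrast of the previous frame and pick a coarser shading rate where there's little detail to lose.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Shading rates a software VRS pass might pick per tile (illustrative).
enum class ShadingRate : uint8_t { Rate1x1, Rate2x2, Rate4x4 };

// Classify fixed-size screen tiles by luminance contrast: flat tiles get a
// coarser rate, detailed tiles keep full rate. 'luma' is a width*height
// buffer of per-pixel luminance in [0, 1] from the previous frame.
std::vector<ShadingRate> classifyTiles(const std::vector<float>& luma,
                                       int width, int height, int tile = 16)
{
    const int tilesX = (width + tile - 1) / tile;
    const int tilesY = (height + tile - 1) / tile;
    std::vector<ShadingRate> rates(tilesX * tilesY, ShadingRate::Rate1x1);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            float lo = 1.0f, hi = 0.0f;
            for (int y = ty * tile; y < std::min((ty + 1) * tile, height); ++y)
                for (int x = tx * tile; x < std::min((tx + 1) * tile, width); ++x) {
                    lo = std::min(lo, luma[y * width + x]);
                    hi = std::max(hi, luma[y * width + x]);
                }
            const float contrast = hi - lo; // crude detail metric
            // Thresholds are made up for the example, not tuned values.
            if (contrast < 0.02f)      rates[ty * tilesX + tx] = ShadingRate::Rate4x4;
            else if (contrast < 0.10f) rates[ty * tilesX + tx] = ShadingRate::Rate2x2;
        }
    }
    return rates;
}
```

In a real engine that classification would run in a compute shader, and the same per-tile buffer is also the natural place to decide how many rays each tile deserves in the hybrid RT case.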

I think some people are overstating the potential performance benefits of a hardware implementation of VRS over a software implementation, when the games we commonly benchmark won't necessarily fall into those ideal patterns. A hardware implementation of VRS will arguably see the most gains when we're rendering VR content with outdated forward renderers and low geometric complexity in our scenes, but I don't think many benchmarks will include VR content at all in the future ...
 
The PS5 definitely doesn't use DX12 Ultimate's VRS, nor DX12's Mesh Shaders, and Infinity Cache isn't there judging from Fritz's die X-rays.
From Matt Hargett's comments, it seems Sony didn't see much value in DX12U's VRS, but it remains to be seen whether they're just not exposing the functionality in their SDKs, or whether the hardware capability really isn't the same as in all the other RDNA2 GPUs. Maybe it's absent and they're doing it through software, or maybe they're implementing foveated rendering through hardware in a different way.

Microsoft has numerous patents around VRS-type implementations; just one such patent is linked below. I could certainly see there being a chance that Microsoft patented VRS up to the eyeballs and then made it part of the DX12U spec, licensing it to the usual suspects (AMD, Nvidia and Intel) for a nominal fee, on the condition that it isn't used in semi-custom silicon solutions. It would certainly be a dirty move, but it could explain the whole "only console with RDNA2" thing that Microsoft was spouting around launch.

Microsoft Patent Describes How Xbox Series X Does Variable Rate Shading | SegmentNext
 
A hardware implementation for VRS will arguably see the most gains when
the developer doesn't want to roll their own software solution; as in they either don't bother at all or they use what's provided.

Having hardware adoption just means that there are options for developers.

We know geometry can be done on compute, but no one wants to do it and most would rather go the route of Primitive/Mesh Shaders to perform that function for instance.
 
Yea that's sort of my point. They are the only ones that we know of so far. Not many companies are (largely) bypassing the entire front end just yet.
 
Yea that's sort of my point. They are the only ones that we know of so far. Not many companies are (largely) bypassing the entire front end just yet.
But even then Epic are using the primitive shaders path for the bigger polygons because that is still more efficient than using their compute shaders method.
 
But even then Epic are using the primitive shaders path for the bigger polygons because that is still more efficient than using their compute shaders method.
yea, I don't think they can use it everywhere IIRC. I think Nanite is only for static geometry. I think object geometry is likely made with Primitive/Mesh Shaders.
The whole point is, just because software variants exist, doesn't mean developers will go for it. It's costly for studios to invest so much in an engine R&D team and build a game simultaneously. Only a handful of companies have this.
 
yea, I don't think they can use it everywhere IIRC. I think Nanite is only for static geometry. I think object geometry is likely made with Primitive/Mesh Shaders.
The whole point is, just because software variants exist, doesn't mean developers will go for it. It's costly for studios to invest so much in an engine R&D team and build a game simultaneously. Only a handful of companies have this.

I wouldn't doubt they're trying for all geometry in Nanite. Media Molecule can do it, skinned meshes too. And hey, look at that RT performance: you've got RT shadows and ambient occlusion on a PS4 in real time. Not to mention basically unlimited detail already; hey look, more detailed models than Horizon Forbidden West, done by one person! The cost is just plain worth it if the studio has the technical chops to pull it off; the entire point is that the normal front end is slow in comparison. And it is, so why are you even drawing giant polygons if the idea is subpixel detail?

This "slow and gradual" mindset makes no sense to me. Replacing triangles works, and it works really, really well. The triangle pipeline has grown into a leviathan of a headache, how many representations do you need simultaneously? BVH, LODs, physics mesh, capsules for GPU animation tricks. How much of an art pipeline do you need to go through, how long would devs sit there tackling that? High res mesh, low res mesh, UV map, bake lods, bake normals, and now on top of normal material painting you get to do a ton of other parameter painting as well!

Or you can look at Dreams and it just... kinda works. Average users can use it, dedicated amateurs can make an entire Sonic game by themselves without any training. The time and cost savings potential are absolutely enormous. What's the argument for traditional pipeline, it's "familiar"? Don't take the "risk"? It seems like nonsense. Which is part of the reason why I see even more contraction of upper end game engines in the future. Yes, smaller studio with a hundred people still trying to make a high end game, you can make your own game engine. Or you can work through the pain of using UE5, or whatever it is Embark Studios is making, or etc. and probably save time and money and end up with a better looking game.
 
I wouldn't doubt they're trying for all geometry in Nanite. Media Molecule can do it, skinned meshes too. And hey, look at that RT performance: you've got RT shadows and ambient occlusion on a PS4 in real time. Not to mention basically unlimited detail already; hey look, more detailed models than Horizon Forbidden West, done by one person! The cost is just plain worth it if the studio has the technical chops to pull it off; the entire point is that the normal front end is slow in comparison. And it is, so why are you even drawing giant polygons if the idea is subpixel detail?

This "slow and gradual" mindset makes no sense to me. Replacing triangles works, and it works really, really well. The triangle pipeline has grown into a leviathan of a headache, how many representations do you need simultaneously? BVH, LODs, physics mesh, capsules for GPU animation tricks. How much of an art pipeline do you need to go through, how long would devs sit there tackling that? High res mesh, low res mesh, UV map, bake lods, bake normals, and now on top of normal material painting you get to do a ton of other parameter painting as well!

Or you can look at Dreams and it just... kinda works. Average users can use it, dedicated amateurs can make an entire Sonic game by themselves without any training. The time and cost savings potential are absolutely enormous. What's the argument for traditional pipeline, it's "familiar"? Don't take the "risk"? It seems like nonsense. Which is part of the reason why I see even more contraction of upper end game engines in the future. Yes, smaller studio with a hundred people still trying to make a high end game, you can make your own game engine. Or you can work through the pain of using UE5, or whatever it is Embark Studios is making, or etc. and probably save time and money and end up with a better looking game.
Sure. I don't disagree. I guess what I meant to write is that, historically, studios that ship games while advancing engine features tend to move slower than teams that just keep working on engine development 24/7.

As for UE5, that still takes in a triangle mesh as the model, which I think is different from Dreams, which I believe uses an SDF model and then rasterizes triangles from the SDF at render time.

But since we don't have many SDF model builders, everyone is using the one found within Dreams. So content creation is much slower compared to studios using professional tools.
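For anyone wondering what an SDF model actually is: instead of storing triangles, the asset stores a function (often a composition of simple shapes) that returns the signed distance to the surface, and the renderer samples it at runtime, whether by splatting, ray marching, or polygonising it into triangles. A toy C++ sketch of such a composed field (purely illustrative, not how Dreams stores its data):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Signed distance to a sphere of radius r at the origin:
// negative inside, zero on the surface, positive outside.
static float sdSphere(Vec3 p, float r) { return length(p) - r; }

// Signed distance to an axis-aligned box with half-extents b.
static float sdBox(Vec3 p, Vec3 b)
{
    Vec3 q{ std::fabs(p.x) - b.x, std::fabs(p.y) - b.y, std::fabs(p.z) - b.z };
    Vec3 o{ std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f) };
    return length(o) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.0f);
}

// The "model" is just the composed field: a sphere unioned with an offset box.
// A renderer can ray-march this, splat points on it, or run marching cubes
// over it to emit triangles at whatever density the current frame needs.
static float sceneSDF(Vec3 p)
{
    Vec3 boxSpace{ p.x - 1.5f, p.y, p.z }; // box translated along +x
    return std::min(sdSphere(p, 1.0f), sdBox(boxSpace, { 0.5f, 0.5f, 0.5f }));
}
```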
 
FidelityFX was just announced for XSX/S and AMD has a dedicated page for it with the features it supports atm: https://gpuopen.com/xbox/#fidelityfx
It is being added to the Xbox GDK.
Maybe the fact that it requires VRS is why they haven't released it for the PS5 as well.
This was kinda the point of my initial post for this thread.
If VRS is a part of AMD's FidelityFX, and the PS5 doesn't have VRS, then how would this be implemented on the PS5? The fact that it isn't available on the PS5 at the same time might point towards this.
 
I'm a little out of the loop.
Wasn't VRS and all already available on the Series consoles before FidelityFX?
And why are there still polemics about geometry occlusion vs VRS on the PS5? Isn't it supported on the Series too?
 
I'm a little out of the loop.
Wasn't VRS and all already available on the Series consoles before FidelityFX?
It's FidelityFX VRS, which is just a module of the FidelityFX toolkit.

It's not replacing DX or anything; it's a set of useful features on top of DX12U.

So developers don't have to use FidelityFX to access VRS.
 
Wasn't VRS and all already available on the Series consoles before FidelityFX?
Yes, VRS has been implemented since the launch-day games on the Series consoles.

The difference here is an effort to unify the development tools, at least between the Xbox Series and Windows PC, which in the long run might be great for getting PC games properly optimized for AMD dGPUs (an area where AMD is traditionally much weaker than Nvidia).
It has nothing to do with hardware capabilities, of course. For example, Vega and Polaris GPUs take advantage of a number of FidelityFX technologies, and neither of them supports VRS. FidelityFX is just the name given to the set of tools created by the GPUOpen initiative.
 