AMD FidelityFX on Consoles

His statement is a bit oversimplified, though. "Doesn't have any ML stuff" just means no dedicated hardware for ML, but ML has been GPU-accelerated for some time now, even on AMD hardware older than RDNA. Just because there's no hardware specifically there to accelerate ML doesn't mean the GPU couldn't do it at all, and probably fast enough to make it worthwhile.
True, I'm not trying to say the PS5 couldn't do some ML via its FP16 compute; I'm sure it can. It's just whether it would give much, if any, tangible benefit. We know DLSS, for instance, uses lower-precision 8-bit and 4-bit math for its upscaling, but it appears that Sony, unlike MS, didn't make changes to the GPU to get these lower-precision abilities.
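To make the lower-precision point concrete: inference frameworks typically squeeze FP32 weights down to 8-bit integers plus a scale factor, which is what makes INT8/INT4 rates matter in the first place. A minimal sketch of symmetric INT8 quantization (illustrative only; the per-tensor scale scheme here is an assumption on my part, not DLSS's actual pipeline):

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-tensor quantization: map FP32 values in [-max|w|, +max|w|]
// onto the INT8 range [-127, 127]. Inference then runs on the int8 values,
// trading a little accuracy for several times the per-register throughput.
struct QuantizedTensor {
    std::vector<int8_t> data;
    float scale; // multiply by this to recover approximate FP32 values
};

QuantizedTensor quantize_int8(const std::vector<float>& w) {
    float max_abs = 0.0f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    QuantizedTensor q;
    q.scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    q.data.reserve(w.size());
    for (float v : w)
        q.data.push_back(static_cast<int8_t>(std::lround(v / q.scale)));
    return q;
}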

We also know that MS studios have been doing a lot of work on upscaling via ML; we know that MS has the hardware, and it has developed an API extension for it.

To be honest, I'm not sure if MS will see much ML of any real value on the XSX, but MS is ahead of Sony on that point.
 
VRS on AMD cards is tied to DirectX 12 Ultimate, and no other API.
This is irrelevant, and it will be supported in Vulkan if it isn't already. It just won't be called DX VRS.
We also know that MS studios have been doing a lot of work on upscaling via ML; we know that MS has the hardware, and it has developed an API extension for it.
Actually, we don't know this.
They used ML upscaling to demo DirectML; that's far from an R&D team, or their studios, actually doing any work on it.
The actual hard part is the models, which they got from Nvidia for the demo.

What we do know is that they're actively working on ML texture de/compression.

If asked I would say I suspect they are working on ML upscaling, but that's just a personal opinion, not a fact.
 
1) It's a deleted tweet, and for good reason. The Xboxes also have Navi-based GPUs and they have specific instructions for NN inference. You're also lacking context on what he means by "any ML stuff". For example, no console has specific processing hardware for mixed precision matrix multiplication, like AMD's CDNA1 or Nvidia's Volta/Turing/Ampere.

2) You really need to read up on what types of hardware are capable of running neural network inference. Your "doesn't have Machine Learning" statement is still completely incorrect.


As for Locuza's tweets, the only fact he can 100% confirm is that the PS5's die area dedicated to the ROPs is larger. There's no capability, or lack thereof, being proven by this.
On the contrary, the PS5 actually has the same number of color ROPs but double the stencil and Z ROPs (and that's without the clock advantage).
 
On the contrary, the PS5 actually has the same number of color ROPs but double the stencil and Z ROPs (and that's without the clock advantage).
Something I've been curious about.
I believe the PS5 has about a 20% fillrate advantage.

But does it also support the new HDR format that the XS does? That is supposed to give around a 50% performance improvement.
I'm unsure how that improvement plays out in the overall fillrate.
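For what it's worth, the ~20% figure falls straight out of the clocks, assuming the commonly cited 64 color ROPs on both machines (the ROP count is the usual assumption, not an official spec):

PS5: 64 ROPs x 2.23 GHz ≈ 142.7 Gpixels/s
XSX: 64 ROPs x 1.825 GHz ≈ 116.8 Gpixels/s
142.7 / 116.8 ≈ 1.22

So roughly a 20% peak pixel fillrate advantage on paper.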
 
Something I've been curious about.
I believe the PS5 has about a 20% fillrate advantage.

But does it also support the new HDR format that the XS does? That is supposed to give around a 50% performance improvement.
I'm unsure how that improvement plays out in the overall fillrate.
Where did you get that? That's the first time I've heard of it.
 
Where did you get that? That's the first time I've heard of it.
[attached image: 202008180228381.jpg]


Savings in BW and blend time, not actual fillrate performance.
But I suspect there's some correlation to overall performance, especially with blending.
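To put a rough number on the BW side (assuming my reading of the slide is right that the format packs an HDR pixel into 32 bits where a conventional FP16x4 target needs 64 bits; that packing is an assumption on my part, not a confirmed spec):

FP16x4 target: 8 bytes/pixel -> 3840 x 2160 x 8 ≈ 66 MB per full-screen write
Packed 32-bit: 4 bytes/pixel -> ≈ 33 MB per write

Halving the bytes written (and read back for blending) per pixel would square with an "around 50%" figure that is about bandwidth and blend cost rather than raw ROP throughput.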
 
They weren't tweets; they were private messages that were then released.
Are you serious? You literally posted the screenshot of a tweet to back up your argument but now you're saying it wasn't a tweet?

Sony never said the PS5 could do VRS.
They didn't announce their GPU ALUs can do FP32 MADD operations either. They do have half a dozen published patents on foveated rendering since 2016, though.
However they're doing foveated rendering, it's not the same way as DX12 Ultimate's VRS. You have no idea how foveated rendering is implemented in Sony's GPU, other than it's different from Microsoft's.

This constant "push for victory" on VRS is puzzling, except that Microsoft's marketing material has been successful at convincing people that a) Sony isn't capable of doing foveated rendering their own way, despite all the patents they've released on the subject, and b) it's a super-important tech / secret sauce whose performance gains are yet to be seen in any relevant way.

And then it also seems that Sony going silent on lower-level specs of their GPU must mean it's an inferior GPU that lacks all RDNA2 features. Except for raytracing, which Cerny happened to have to confirm three times in several interviews (otherwise some would still believe the PS5 was doing RT through the CPU or something).


We know DLSS, for instance, uses lower-precision 8-bit and 4-bit math for its upscaling
Care to point to any official statement corroborating this?


it appears that Sony, unlike MS, didn't make changes to the GPU to get these lower-precision abilities.
Microsoft didn't change the GPU to get "lower-precision abilities". Higher-throughput RPM of INT8 and INT4 has been in all RDNA GPUs except the first Navi 10. It's in the RDNA1 whitepaper.
It would actually take Sony making changes to the GPU to not have 4xINT8 / 8xINT4 throughput. Whether they did that or not is up for confirmation (or faith, apparently).
It'd be best to stop conflating Microsoft's declared "ML enhancements" with a feature that has been present in all RDNA GPUs released since October 2019.
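To make the 4xINT8 / 8xINT4 claim concrete: the ALUs pack four 8-bit (or eight 4-bit) integers into each 32-bit register and do a fused dot-product-accumulate across the whole register in one instruction. A plain C++ sketch of what the 4xINT8 operation computes (the same math as RDNA's V_DOT4_I32_I8, written out scalar purely for illustration):

Code:
#include <cstdint>

// Emulates a 4-way INT8 dot-product-accumulate. The GPU does this on a
// whole 32-bit register per lane in one instruction, which is where the
// "4x the FP32 rate" throughput figure comes from.
int32_t dot4_i32_i8(uint32_t a_packed, uint32_t b_packed, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t a = static_cast<int8_t>((a_packed >> (8 * i)) & 0xFF);
        int8_t b = static_cast<int8_t>((b_packed >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}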


If asked I would say I suspect they are working on ML upscaling, but that's just a personal opinion, not a fact.
If this is about FSR, then from Scott Herkelman's statements it looks like AMD's new upscaling technique won't use ML at all.
 
[attached image: 202008180228381.jpg]


Savings in BW and blend time, not actual fillrate performance.
But I suspect there's some correlation to overall performance, especially with blending.
A new software format? For doing what, exactly? Just some vague new compression, and that will be enough to compensate for their hardware deficiency in ROPs vs the PS5? BTW, the XSX has the same number of color ROPs (blending) and half the number of stencil and depth ROPs.
 
A new software format? For doing what, exactly? Just some vague new compression, and that will be enough to compensate for their hardware deficiency in ROPs vs the PS5? BTW, the XSX has the same number of color ROPs (blending) and half the number of stencil and depth ROPs.
I have no idea how much difference it will make, as I don't know the overall effect it will have once that format is used. Which I said.
They've given some figures, but it would take someone far more knowledgeable than me to parse them and say what effect they'll have.

What do you mean by software format? It's a new (as far as I know) hardware-supported format.

I raised it because, like I said, I'm curious.
 
This is irrelevant, and it will be supported in Vulkan if it isn't already. It just won't be called DX VRS.

Actually, we don't know this.
They used ML upscaling to demo DirectML; that's far from an R&D team, or their studios, actually doing any work on it.
The actual hard part is the models, which they got from Nvidia for the demo.

What we do know is that they're actively working on ML texture de/compression.

If asked I would say I suspect they are working on ML upscaling, but that's just a personal opinion, not a fact.
They are working on ML-based Super Resolution. They have mentioned it in pretty much every material they have released about ML on the XSX/S. They also mentioned it explicitly in an article from DF.
 
They are working on ML-based Super Resolution. They have mentioned it in pretty much every material they have released about ML on the XSX/S. They also mentioned it explicitly in an article from DF.
You'll have to dig that up for me if you can find an example of the few you've mentioned.

I've seen them say it can be used for things like ML upscaling.
That's far from them working on it.

In the past their belief was: give the tools to developers and they'll do the implementations.
Whereas here they have to do more leading from the front. Something I suspect they're doing.
 
They are working on ML-based Super Resolution. They have mentioned it in pretty much every material they have released about ML on the XSX/S. They also mentioned it explicitly in an article from DF.

At the same time, AMD has openly stated they're releasing FSR to be used in both discrete GPUs and the new-gen consoles, while suggesting it's not ML-based.
If FSR happens to be more effective than Microsoft's ML-based upscaling, then Microsoft won't suggest using their solution just for the sake of using ML inference. At least I don't think they'd be irrational at that point.

Scott Herkelman seemed pretty bullish on FSR's expected results BTW.
 
At the same time, AMD has openly stated they're releasing FSR to be used in both discrete GPUs and the new-gen consoles, while suggesting it's not ML-based.
If FSR happens to be more effective than Microsoft's ML-based upscaling, then Microsoft won't suggest using their solution just for the sake of using ML inference. At least I don't think they'd be irrational at that point.

Scott Herkelman seemed pretty bullish on FSR's expected results BTW.
I think whatever AMD is doing has no bearing on what MS is doing. Both solutions can be available to developers to make the best choice based on how it suits their needs. We already have a variety of image reconstructions available. The availability of one method does not preclude the development of others.
 
You'll have to dig that up for me if you can find an example of the few you've mentioned.

I've seen them say it can be used for things like ML upscaling.
That's far from them working on it.

In the past their belief was: give the tools to developers and they'll do the implementations.
Whereas here they have to do more leading from the front. Something I suspect they're doing.
From this DF article: https://www.eurogamer.net/articles/digitalfoundry-2020-xbox-series-s-big-interview "There were some further topics I raised in the follow-up interview. First of all, both Series S and Series X GPUs feature support for INT-4 and INT-8 extensions on the shaders, opening the door to machine learning applications. The HotChips presentation mentioned AI upscaling as a use for this. "It's an area of very active research for us, but I don't really have anything more to say at this point," Goossen said".
 
From this DF article: https://www.eurogamer.net/articles/digitalfoundry-2020-xbox-series-s-big-interview "There were some further topics I raised in the follow-up interview. First of all, both Series S and Series X GPUs feature support for INT-4 and INT-8 extensions on the shaders, opening the door to machine learning applications. The HotChips presentation mentioned AI upscaling as a use for this. "It's an area of very active research for us, but I don't really have anything more to say at this point," Goossen said".
Thanks, forgot about that.
I always find those kinds of answers boil down to: they're not going to say no, they're looking into everything, and don't ask about it.

Although I still believe that in a lot of the places where people seem to think it was said, it wasn't.
But as I said, I personally think they are, and that quote is reason enough for others.
 
They didn't announce their GPU ALUs can do FP32 MADD operations either. They do have half a dozen published patents on foveated rendering since 2016, though.
However they're doing foveated rendering, it's not the same way as DX12 Ultimate's VRS. You have no idea how foveated rendering is implemented in Sony's GPU, other than it's different from Microsoft's.
Maybe one in every 1,000 patents actually ends up going into actual practice.
The patent for Foveated Rendering is in relation to VR, not the PS5.
There was a Sony patent that showed the console having unified L3 Cache in the CPU, which sent all the fanboys into a rage saying that the PS5 CPU was Zen 3 and had unified cache, when in reality the die shots show that it doesn't have unified cache at all.
Here is a patent for a time travel machine, so I guess it must be legit.
https://patents.google.com/patent/US20090234788A1/en
This constant "push for victory" on VRS is puzzling, except that Microsoft's marketing material has been successful at convincing people that a) Sony isn't capable of doing foveated rendering their own way, despite all the patents they've released on the subject, and b) it's a super-important tech / secret sauce whose performance gains are yet to be seen in any relevant way.
The constant "push for victory", as you put it, is to correct misinformation that was put out there. People ran with the PS5 having VRS because it was supposedly a core function of RDNA 2, and so by default the PS5 must have it.
Well, as we have found out, neither the PS5 nor the XSX is full RDNA 2; they are actually made up of some RDNA 1 parts and some RDNA 2 parts, and the PS5 is using more RDNA 1 parts than the XSX is. It was a flawed conclusion.
It is obvious to anyone with an ounce of logic that the PS5 does not have VRS: from the fact that Sony didn't list it as a feature, to the fact that the PS5 uses the older RDNA 1 ROPs, which lack the alterations needed to perform VRS, while the XSX has the newer RDNA 2 ROPs with VRS hardware. You then have devs who have spoken about using a software solution for VRS on the PS5, similar to what they used on the PS4, while using the hardware VRS on the XSX. Then we have games released that use VRS on the XSX but not on the PS5. AMD has also tied VRS to DirectX 12 Ultimate and no other API.
You really have to be willfully ignorant at this point to continue thinking that the PS5 has VRS.
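For reference, hardware VRS isn't exotic on the API side; this is roughly the whole per-draw (Tier 1) path in D3D12 (a minimal sketch, with device creation and error handling omitted):

Code:
#include <d3d12.h>

// Query VRS support, then ask the rasterizer to shade one pixel per 2x2
// quad for subsequent draws. Tier 1 is per-draw rate only; Tier 2 adds
// per-primitive rates and a screen-space shading-rate image.
void DrawWithCoarseShading(ID3D12Device* device,
                           ID3D12GraphicsCommandList5* cmdList) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                &options6, sizeof(options6));
    if (options6.VariableShadingRateTier ==
        D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
        return; // no VRS hardware exposed by this device

    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... issue draw calls here; they shade at quarter rate ...
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr); // restore
}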
And then it also seems that Sony going silent on lower-level specs of their GPU must mean it's an inferior GPU that lacks all RDNA2 features. Except for raytracing, which Cerny happened to have to confirm three times in several interviews (otherwise some would still believe the PS5 was doing RT through the CPU or something).
Sony has spoken about their main GPU features, including their cache scrubbers, their GE and their Primitive Shaders. Until they say they have other features, it's most likely that they don't.
Using your logic, Microsoft hasn't said the XSX doesn't have cache scrubbers, so it does have them until they say otherwise.

The reality is that the PS5 does lack a lot of RDNA 2 features. It lacks Mesh Shaders, VRS, SFS and Infinity Cache for starters. The XSX also lacks some RDNA 2 features, like Infinity Cache.
The fact is that Microsoft and AMD actually did a lot of common work together on RDNA 2 and DirectX 12 Ultimate. Go on AMD's site and see how much DX12 Ultimate is on there. The only API that AMD supports for Mesh Shading, VRS and Sampler Feedback on RDNA 2 is DX12.
That may be the reason why the PS5 doesn't have them; or it could be that, like MS said, they had to wait longer to get those functions, and maybe Sony didn't want to wait. Or maybe Sony just didn't see the value in them. Who knows.

I'm not sure why you have taken the whole "fanboys were saying the PS5 RT wasn't hardware based" thing so personally. That's just the ramblings of 12-year-old console warriors. On the other side you have Sony nuts saying the XSX's 12 TFLOPS were GCN flops and not RDNA flops. It's all just silliness.

Care to point to any official statement corroborating this?
I'm not sure Nvidia goes around putting out official statements on everything they do, but there are plenty of articles on them using INT8 and INT4 and the benefits they get over higher precision.
https://developer.nvidia.com/blog/int4-for-ai-inference/
Microsoft didn't change the GPU to get "lower-precision abilities". Higher-throughput RPM of INT8 and INT4 has been in all RDNA GPUs except the first Navi 10. It's in the RDNA1 whitepaper.
It would actually take Sony making changes to the GPU to not have 4xINT8 / 8xINT4 throughput. Whether they did that or not is up for confirmation (or faith, apparently).
It'd be best to stop conflating Microsoft's declared "ML enhancements" with a feature that has been present in all RDNA GPUs released since October 2019.
The RDNA whitepaper refers to INT8 texture sampling, which has nothing to do with INT8 for compute.
Also, not all RDNA cards had INT8 and INT4 capabilities; it was only on some cards, not a stock feature found in all of them. It's the same situation with RDNA 2. Microsoft themselves said they added the customization to their GPU, meaning it wasn't a base feature across all cards.
There is no word that Sony also added it.

How about you send me some confirmation from Sony that the PS5 GPU has INT8 and INT4 precision, then?
Silence from Sony tends to indicate they don't have something. They mentioned FP16 on the PS4 Pro, and I would have expected them to say the PS5 had it if it did.
But like I said, I look forward to you providing me with the official confirmation that the PS5 has it.

The only confirmation we have, from one of its principal engineers, is that it doesn't have Machine Learning.
 
The only confirmation we have, from one of its principal engineers, is that it doesn't have Machine Learning.
I am very curious about this. DLSS requires machine learning.
FidelityFX is the equivalent of DLSS and is supposedly coming to both platforms. But we know the PS5 doesn't have ML. So how is it supposed to have it?
 
There isn't a rule that every upscaling technique must be ML. The idea that it needs ML is just proof that marketing can work wonders.

Insomniac's method is bonkers good, and it's being done on 36 CU RDNA.
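And to underline the point that reconstruction doesn't require inference, a toy non-ML upscaler is just fixed arithmetic. This bilinear sketch is nothing like Insomniac's actual technique (theirs is temporal injection, reusing samples across frames); it's only here to show there is no model involved:

Code:
#include <cstdint>
#include <vector>

// Toy single-channel bilinear upscale: every output pixel is a weighted
// average of its four nearest source pixels. Pure arithmetic, no model.
std::vector<float> upscale_bilinear(const std::vector<float>& src,
                                    int sw, int sh, int dw, int dh) {
    std::vector<float> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Map the output pixel centre back into source space.
            float sx = (x + 0.5f) * sw / dw - 0.5f;
            float sy = (y + 0.5f) * sh / dh - 0.5f;
            int x0 = sx < 0 ? 0 : static_cast<int>(sx);
            int y0 = sy < 0 ? 0 : static_cast<int>(sy);
            int x1 = x0 + 1 < sw ? x0 + 1 : sw - 1;
            int y1 = y0 + 1 < sh ? y0 + 1 : sh - 1;
            float fx = sx > x0 ? sx - x0 : 0.0f;
            float fy = sy > y0 ? sy - y0 : 0.0f;
            float top = src[y0 * sw + x0] * (1 - fx) + src[y0 * sw + x1] * fx;
            float bot = src[y1 * sw + x0] * (1 - fx) + src[y1 * sw + x1] * fx;
            dst[static_cast<size_t>(y) * dw + x] = top * (1 - fy) + bot * fy;
        }
    }
    return dst;
}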
 
There isn't a rule that every upscaling technique must be ML. The idea that it needs ML is just proof that marketing can work wonders.

Insomniac's method is bonkers good, and it's being done on 36 CU RDNA.
It was also done on 36 CU GCN ;)
 