AMD FidelityFX on Consoles

kill off DLSS2

Buying up studios and having everything multiplat instead of greedy exclusives might be a way to kill off PlayStation.

That would maybe leave PS5 out

Perhaps, just like with Dolby Atmos, the PS5 had to have this exotic audio solution that no one outside first-party studios will utilize (and that has a less impressive effect to boot).

Xbox's Auto HDR uses machine learning. That means every Xbox One game running on Series consoles is a commercially available game that uses ML.

The PS4 is doing ray tracing in some games, so it has hardware ray tracing; case closed.

What does this have to do with anything discussed here?

Indeed, and in the same vein, using ML for a subtle thing like muscle deformation on a character is a totally different thing from reconstructing an image from 1080p all the way to 4K.
 
I think the argument is not what hardware you need to do ML, but what kind of hardware enables you to do it efficiently enough to run it alongside game code.
Running ML alongside game code, as in running it in parallel? I think only tensor cores can do that.

If you mean being able to run ML inference at fast enough speeds, the main contributors to making models run faster are the size of the model and any feature engineering that needs to be done before running data through the model. If none is required, that is ideal. Then you're just looking at the algorithm it's running and how many layers there are for it to provide an acceptable return. Once you figure out the absolute minimums, you can choose to encode it, or mixed-encode it, to lower precision and see if you lose any more accuracy.

And this is where int8 would come in. And if for some crazy reason you can encode it further down to int4 and not lose any accuracy, then go for it.
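To make that concrete, here is a minimal sketch (Python/NumPy, made-up sizes and data, not any real toolchain or console SDK) of quantizing a weight matrix to int8 and checking how much accuracy the lower precision costs:

Code:
import numpy as np

# Minimal sketch of "encode to lower precision and check what you lose":
# symmetric per-tensor int8 quantization of a weight matrix, then a comparison
# of the dequantized result against full precision. Sizes and data are made up.

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=(256, 256)).astype(np.float32)
activations = rng.normal(0.0, 1.0, size=(256,)).astype(np.float32)

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0              # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

q_w, w_scale = quantize_int8(weights)
q_a, a_scale = quantize_int8(activations)

# int8 multiply, accumulate in int32, then rescale back to float
approx = (q_w.astype(np.int32) @ q_a.astype(np.int32)) * (w_scale * a_scale)
reference = weights @ activations

rel_error = np.abs(approx - reference).max() / np.abs(reference).max()
print(f"max relative error after int8 quantization: {rel_error:.4f}")

If the error is acceptable for the task, you keep the bandwidth and throughput savings; if not, you keep the sensitive layers at FP16/FP32 and mix precisions.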

Afterwards, it's all about the number of available ALUs you have to run the network. More available threads means you can process larger networks faster (not necessarily true), so, and I'm just throwing out rough numbers, if you wanted to transform a 1080p image into a 4K image, your input will be about 2.1M pixels and your output about 8.3M pixels. To run the later layers of the NN in a single shot (all threads being processed at the same time) requires you to have 8.3M threads available. I'm not even talking about the calculations that are going to happen, just threads. And even maxing out the number of threads may not be ideal for performance, but that's a different topic.

So basically you're going to have to run multiple passes per layer to get through all those pixels before you're allowed to move on to the next layer. NNs are serialized in this fashion: there are many nodes in a layer that need processing, but you cannot process the next layer without the previous layer being completed. So having wide GPUs and plenty of bandwidth will matter for this type of thing. Then you can start factoring in int8/int4 encoding and mixed-precision processing to improve your performance further.
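A back-of-the-envelope sketch of that serialization, with an invented lane count and layer count rather than figures for either console:

Code:
import math

# Back-of-the-envelope sketch of the "multiple passes per layer" point.
# The lane count and layer count are invented, not a claim about any console.

input_pixels  = 1920 * 1080    # ~2.07M threads if one thread per input pixel
output_pixels = 3840 * 2160    # ~8.29M threads for the output-resolution layers
gpu_lanes     = 3328           # hypothetical number of shader ALUs / lanes
layers        = 8              # hypothetical network depth

passes_per_output_layer = math.ceil(output_pixels / gpu_lanes)
print(f"output pixels: {output_pixels:,}")
print(f"passes per output-resolution layer: {passes_per_output_layer:,}")
print(f"total passes if every layer ran at output res: {passes_per_output_layer * layers:,}")

# Each layer must finish before the next one starts, so these passes serialize;
# wider GPUs (more lanes) and more bandwidth reduce the passes per layer.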

This is the way deep learning works, but this isn't how all machine learning algorithms work. Without knowing what the model is (it may not be an NN) and without knowing what encoding schemes are being used, if any at all, there may be no advantages that the XSX holds over the PS5 except in available ALUs and the bandwidth to feed them, because not all ML algorithms will leverage those features (int4/8 and mixed-precision dot products).
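As a concrete example of a model that wouldn't care, here is a toy hand-rolled stump ensemble (all thresholds and values invented) whose inference is nothing but comparisons and branches:

Code:
# Toy non-NN model: a hand-rolled ensemble of decision stumps. Inference is just
# comparisons and branches, no wide dot products, so int8/int4 dot-product
# hardware and mixed precision wouldn't buy it anything. Values are made up.

ensemble = [
    # (feature index, threshold, value if below, value if at/above)
    (0, 0.5, -1.0, 1.0),
    (2, 1.2,  0.3, -0.7),
    (1, -0.1, 0.9,  0.2),
]

def predict(sample):
    score = 0.0
    for feature, threshold, below, at_or_above in ensemble:
        score += below if sample[feature] < threshold else at_or_above
    return score

print(predict([0.2, 0.0, 2.0]))   # -1.0 + (-0.7) + 0.2 = -1.5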
 
Well, nVidia now has a generalized process that doesn't require training for every game. So that would leave hypothetical training for XSX, PS5, and PC. But I would assume that training done for an AMD GPU on XSX would also apply to RDNA2 GPUs on PC, since they are both using DX12U. That would maybe leave PS5 out, but.... I think the point of the FidelityFX suite is that you have compatibility within that suite. As in, if you train an AI for FidelityFX's upscaling, that trained model would work across all GPUs that support FidelityFX upscaling. And that assumes a need to train for every game. Or that AMD's upsampling solution requires, or even uses, ML.

Even if we go with the assumptions that it's ML-based, that the PS5 has a deficiency in ML performance, and that you have to train per game, that doesn't mean that it can't perform the upscaling, nor does it mean that rendering at a lower resolution plus the overhead of upscaling would be less performant than native rendering. Which means that it's still a useful tool, and developers will use it if it's easy enough to implement and performs well.
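A quick hypothetical frame-budget sketch of that point, with every number invented purely for illustration:

Code:
# Hypothetical frame-budget arithmetic. All numbers are invented: even if the
# upscaling pass itself has a noticeable fixed cost, rendering at a lower
# internal resolution plus that cost can still beat native rendering.

native_4k_ms      = 16.0                                     # pretend native 4K frame cost
internal_1440p_ms = 16.0 * (2560 * 1440) / (3840 * 2160)     # naive resolution scaling (~7.1 ms)
upscale_cost_ms   = 2.5                                      # pretend cost of the upscaling pass

upscaled_total = internal_1440p_ms + upscale_cost_ms
print(f"native 4K:            {native_4k_ms:.1f} ms")
print(f"1440p + upscale pass: {upscaled_total:.1f} ms")      # ~9.6 ms, still well ahead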

Also, chaining together a bunch of assumptions is a guarantee for success. Trust me, I looked it up on reddit.
So is there some proprietary IP for the training?
I mean, if Microsoft is using its supercomputers to do the training, why would they then want or allow the results to be used to maybe help, say, a PS5 game out?
Same with Nvidia. If they are using their supercomputers to do all the heavy lifting for ML, why would they allow that to help AMD cards out?
 
And yet PlayStation is the only current console platform that actually has machine learning being performed in a commercially available game.

There's more evidence of it being there than it being absent.
As others have pointed out, you can do ML on your smart phone.
The question is whether the PS5 has int8 and int4 lower precision.
There was half-precision FP16 on the PS4 Pro, and it's not a hard guess to think it's there again on the PS5.
You can use FP16 for ML and have an advantage over using full precision.
With the XSX we have Microsoft saying on record that they ADDED the hardware required to get int8 and int4. They have outlined the specs for that. The fact they said they added it tends to indicate that it wasn't a stock feature. They didn't say they added hardware to get ray tracing on the Series X, because that was a stock RDNA 2 feature.
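For a sense of scale, the back-of-the-envelope maths behind those precisions, assuming the publicly stated 12.15 TFLOPS FP32 figure and the usual 2x/4x/8x rate multipliers for FP16/INT8/INT4 packed and dot-product instructions:

Code:
# Rough sketch of why the lower precisions matter, using the publicly stated
# 12.15 TFLOPS FP32 figure for the Series X and the commonly cited rate
# multipliers for packed math / dot-product instructions (2x FP16, 4x INT8,
# 8x INT4). Back-of-the-envelope only, not a spec sheet.

fp32_tflops = 12.15
rates = {"FP16 (TFLOPS)": 2, "INT8 (TOPS)": 4, "INT4 (TOPS)": 8}

for name, multiplier in rates.items():
    print(f"{name}: {fp32_tflops * multiplier:.1f}")
# FP16 ~24.3, INT8 ~48.6, INT4 ~97.2 -- the INT8/INT4 figures are the ones
# Microsoft highlights for ML inference.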
We have David Cage saying that the XSX is going to be better at ML than the PS5 because its shader cores are more suited for it. I wonder whatever he could mean by that...
https://www.google.com/amp/s/wccfte...-powered-shader-cores-says-quantic-dream/amp/

You also have a PS5 engineer saying that the PS5 didn't have the ML hardware added to it.

And let's not forget, Microsoft has a track record of highlighting features on the XSX that the PS5 lacks: from pointing out to Digital Foundry that their SoC ran fixed clocks before Sony announced that theirs was variable, to highlighting VRS, Mesh Shaders and SFS for the XSX knowing the PS5 lacked them. MS has also highlighted the int8 and int4 capabilities of their console.

Sony has also highlighted things that it has over the XSX such as Cache Scrubbers and the faster SSD.

We spent 6 months hearing from Sony fanboys that the PS5 had VRS, Mesh Shaders and SFS because they were RDNA 2 features, and as the PS5 was RDNA 2 it must have them by default. They held on like grim death to that until it became overwhelmingly obvious that it wasn't the case. We still have holdouts who think the PS5 has VRS because of the VR patent, and that Primitive Shaders and the GE on the PS5 are the same as Mesh Shaders, even though AMD had the GE and Primitive Shaders in their older GPUs.
 
Yes, and it looks wonderful *ahem*
Just because you do or don't care for a result doesn't mean that it's not relevant to the conversation regarding machine learning on consoles.

So is there some proprietary IP for the training?
I mean, if Microsoft is using its supercomputers to do the training, why would they then want or allow the results to be used to maybe help, say, a PS5 game out?
Same with Nvidia. If they are using their supercomputers to do all the heavy lifting for ML, why would they allow that to help AMD cards out?
I was assuming that AMD would do the training, since we are discussing "AMD Fidelity FX On Consoles" and not anything specific to the Xbox or Microsoft. So we can add that assumption as another link in the assumption chain for guaranteed success. But it is possible that Microsoft have a specialized solution that leverages Series hardware. And again, if it's easy enough to implement, and offers performance or IQ benefits, developers will use it.
 
Sorry, I hope this isn't too OT...

But my understanding of the Auto-HDR feature is that it is simply a custom SDR -> HDR transform applied on a game-by-game basis, and that while it is TRAINED by ML, the training is simply there to get the right values for the colourspace conversion that forms the SDR -> HDR transform.
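As a toy illustration of that idea (explicitly not Microsoft's actual pipeline, and with made-up parameter values), the trained output could be as small as a couple of parameters for a fixed transform that is then cheap to apply per pixel:

Code:
import numpy as np

# Toy illustration only, NOT Microsoft's actual Auto HDR pipeline: the "ML" part
# happens offline and produces a handful of parameters for a fixed SDR -> HDR
# transform, which is then cheap to apply at runtime. Parameter values are made up.

learned_params = {"gamma": 1.8, "peak_nits": 650.0}   # pretend per-game training output

def sdr_to_hdr(sdr_rgb, params):
    """Map SDR values in [0, 1] to luminance in nits via the learned curve."""
    curve = np.power(np.clip(sdr_rgb, 0.0, 1.0), params["gamma"])
    return curve * params["peak_nits"]

sdr_pixel = np.array([0.25, 0.5, 1.0])
print(sdr_to_hdr(sdr_pixel, learned_params))   # brighter values pushed toward peak nits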

What's more, I think this happens in the output block of the Series consoles, and not strictly within the GPU itself, which is very different from what we are talking about regarding using "traditional" GPU resources to implement an AI-inference-based upscaling method.

I actually don't know how much separation there is in the current-gen consoles between the GPU silicon and the "output block" (which I would consider to include the HDMI encoder).

I know in the past I have speculated that combining GPU-based upscaling with intelligent work in the output block might lead to the best possible result for any upscaling system, simply due to the powerful and flexible nature of the output block on Series consoles.
 
As others have pointed out, you can do ML on your smart phone.
The question is whether the PS5 has int8 and int4 lower precision.
There was half-precision FP16 on the PS4 Pro, and it's not a hard guess to think it's there again on the PS5.
You can use FP16 for ML and have an advantage over using full precision.
With the XSX we have Microsoft saying on record that they ADDED the hardware required to get int8 and int4. They have outlined the specs for that. The fact they said they added it tends to indicate that it wasn't a stock feature. They didn't say they added hardware to get ray tracing on the Series X, because that was a stock RDNA 2 feature.
We have David Cage saying that the XSX is going to be better at ML than the PS5 because its shader cores are more suited for it. I wonder whatever he could mean by that...
https://www.google.com/amp/s/wccfte...-powered-shader-cores-says-quantic-dream/amp/

You also have a PS5 engineer saying that the PS5 didn't have the ML hardware added to it.

And let's not forget, Microsoft has a track record of highlighting features on the XSX that the PS5 lacks: from pointing out to Digital Foundry that their SoC ran fixed clocks before Sony announced that theirs was variable, to highlighting VRS, Mesh Shaders and SFS for the XSX knowing the PS5 lacked them. MS has also highlighted the int8 and int4 capabilities of their console.

Sony has also highlighted things that it has over the XSX such as Cache Scrubbers and the faster SSD.

We spent 6 months hearing from Sony fanboys that the PS5 had VRS, Mesh Shaders and SFS because they were RDNA 2 features, and as the PS5 was RDNA 2 it must have them by default. They held on like grim death to that until it became overwhelmingly obvious that it wasn't the case. We still have holdouts who think the PS5 has VRS because of the VR patent, and that Primitive Shaders and the GE on the PS5 are the same as Mesh Shaders, even though AMD had the GE and Primitive Shaders in their older GPUs.

You're a broken record, @invictis. I suggest dropping this persistent rumour of yours, it's really going nowhere other than round and round in circles.

It's happened several times before now in this very thread and frankly it's boring to repeatedly go through it every few pages.

We spent 6 months hearing from Sony fanboys

You realise the majority of people on B3D don't care for the platform warring and bias? You own an Xbox, good for you - they're good consoles. Just drop the boring "Microsoft has X feature, Sony does not". You've done it before, and again, and again. We heard you, it's acknowledged that there may be differences in the ML implementation.

Let's now check out how the different implementations pan out.
 
Just because you do or don't care for a result doesn't mean that it's not relevant to the conversation regarding machine learning on consoles.


I was assuming that AMD would do the training, since we are discussing "AMD Fidelity FX On Consoles" and not anything specific to the Xbox or Microsoft. So we can add that assumption as another link in the assumption chain for guaranteed success. But it is possible that Microsoft have a specialized solution that leverages Series hardware. And again, if it's easy enough to implement, and offers performance or IQ benefits, developers will use it.
Yeah AMD could take the lead on that for sure.
But we still have AMD doing the training for their GPUs, Nvidia for DLSS, and no doubt Intel will have some sort of ML on their new cards as well.
But note, this is me asking questions when I'm not aware of just how much is involved in doing the AI training on a game. I am assuming it's quite complex, otherwise every game would have DLSS upscaling as a boost.
I am then also assuming the way Nvidia goes about their training might well be different to how AMD would do it.

So with all those assumptions, I may well be off track.
 
You're a broken record, @invictis. I suggest dropping this persistent rumour of yours, it's really going nowhere other than round and round in circles.

It's happened several times before now in this very thread and frankly it's boring to repeatedly go through it every few pages.
What rumour are you talking about? Missed that.
If you are referring to lower precision hardware in PS5, there's no rumour about that?
We know MS added it to the XSX. Sony has yet to state they have it. Devs have come out and said the XSX's shader cores are more suited to ML than the PS5's, which they wouldn't do if they both contained the same tech, now would they? The only word on the matter from anyone at Sony was one of their engineers, who said it didn't have the ML stuff in it.
The best people like you can say is that some AMD cards had it as well.
If you don't like the discussion about what features are or aren't in the consoles, maybe don't quote people and challenge them on it?
It's an opt-in system, really.

You realise the majority of people on B3D don't care for the platform warring and bias? You own an Xbox, good for you - they're good consoles. Just drop the boring "Microsoft has X feature, Sony does not". You've done it before, and again, and again. We heard you, it's acknowledged that there may be differences in the ML implementation.

Let's now check out how the different implementations pan out.
Most people on B3D don't system-war, that's correct. Most like to get to the truth about what a system does or can do. Nowhere did I say the XSX is a better machine than the PS5 because it has it; in fact, you can read where I said I don't think anything will come of the ML features on the XSX. I don't think they will be adopted in any real form, and maybe, just maybe, one of MS's own studios might play around with them.
I'm the worst Xbox fanboy in the world talking like that.

My point about the Sony fanboys was that, just like with the ML, they held on to the PS5 having VRS, for instance, because it was an RDNA 2 feature, so it must be on the PS5. The reality is that neither the PS5 nor the XSX is full RDNA 2. They are both custom chips. That doesn't mean one is better than the other. From what I understand, there were no performance gains from RDNA 2 vs RDNA 1; the changes were around the RT, VRS, Mesh Shaders and SFS additions, and improving power efficiency.

If it makes you feel better, I'm happy to talk about Cache Scrubbers and how they are on the PS5 and not on the XSX, and I would love to dig down and find out just what this feature will mean and what sort of performance advantage the PS5 can expect from it.

There is no point talking about tech if you want to steer away from talking about any advantages it gives. It's the whole reason why Nvidia, Sony, AMD and Microsoft introduce these features into their products, and consumers like us should enjoy talking about exactly how these changes work.
 
Basically, all of us talking about it are the equivalent of dogs (color blind) discussing whether two objects in front of us are Blue or Not Blue. Some helpful human walked by and said the right one is Blue, so we know it's Blue. However, no one has walked by to tell us whether the left one is Blue or Not Blue. Now, how do we as dogs that can't see color determine if the left one is Blue or Not Blue? :)

By inference, the dogs were discussing whether one, or more, of the blocks were blue. A human stops and says the one on the right is Blue. So by inference the left one is not Blue otherwise the human would have said they are both Blue and not just picked out one.

I'm not disagreeing with your point, just being OCD about the example :D
 
So is there some proprietary IP for the training?
I mean, if Microsoft is using its supercomputers to do the training, why would they then want or allow the results to be used to maybe help, say, a PS5 game out?
Same with Nvidia. If they are using their supercomputers to do all the heavy lifting for ML, why would they allow that to help AMD cards out?
Nvidia doesn't and won't share it.

AMD would; a simple example is FidelityFX, which is open source and can run on Nvidia cards as well.

Some of the reasons behind the difference in approach come down to where they are in the market.
AMD, due to market share, needs as much uptake and usage of their tools as possible, as developers using Nvidia's toolkit locks them out of it and leaves them at a disadvantage.

In regards to MS, they can approach it a few different ways, or all of them:
  • Make it free for use with DX12U; this would lock out Vulkan and PS5 and give a studio additional reasons to use DX.
  • Put it in something like PlayFab, so that it can be used on Vulkan and PS5; the more people use features from PlayFab, the more likely they are to use other features it has to offer, like Azure, basically ending up upselling.

PlayFab is an MS suite that can be used by anyone, including Sony.

Models being able to run everywhere doesn't mean they're free to.
 
Yeah AMD could take the lead on that for sure.
But we still have AMD doing the training for their GPUs, Nvidia for DLSS, and no doubt Intel will have some sort of ML on their new cards as well.
But note, this is me asking questions when I'm not aware of just how much is involved in doing the AI training on a game. I am assuming it's quite complex, otherwise every game would have DLSS upscaling as a boost.
I am then also assuming the way Nvidia goes about their training might well be different to how AMD would do it.

So with all those assumptions, I may well be off track.
While I don't think nVidia is going to stop working on DLSS, the current implementation does not require per-game training. I think they use their market position and marketing deals to leverage developers into supporting it. We don't know if the AMD solution will require per-game training, but recent comments from them hint that FidelityFX Super Resolution might not even use ML.

Nvidia doesn't and won't share it.

AMD would; a simple example is FidelityFX, which is open source and can run on Nvidia cards as well.

Some of the reasons behind the difference in approach come down to where they are in the market.
AMD, due to market share, needs as much uptake and usage of their tools as possible, as developers using Nvidia's toolkit locks them out of it and leaves them at a disadvantage.

In regards to MS, they can approach it a few different ways, or all of them:
  • Make it free for use with DX12U; this would lock out Vulkan and PS5 and give a studio additional reasons to use DX.
  • Put it in something like PlayFab, so that it can be used on Vulkan and PS5; the more people use features from PlayFab, the more likely they are to use other features it has to offer, like Azure, basically ending up upselling.

PlayFab is an MS suite that can be used by anyone, including Sony.

Models being able to run everywhere doesn't mean they're free to.
I'm pretty interested to see how Microsoft's solution differs from AMD's. We know Microsoft is working on ML projects for DirectX, and we know AMD is working on an upscaling solution to compete with DLSS, and obviously they partner on plenty of projects. But people talk as if FidelityFX Super Resolution and a DirectML upscaler evangelized by Microsoft aren't going to be separate things. I think everything Microsoft has shown of their solution was some Forza footage, and IIRC it was running on nVidia hardware.
 
I'm sure there are many developers that simply don't want to spend the time incorporating DLSS into their game, or whose game engine has issues that prevent DLSS integration. There are probably several reasons beyond the technical why most games don't have DLSS; we're also not really privy to Nvidia's requirements. It makes sense, though, for as many developers as possible to integrate it, provided it's worth their time and their game actually benefits from it (i.e. the game's graphics are demanding at higher resolutions).

A solution by AMD would likely fare worse than DLSS on PC in terms of widespread use, even if it's open.
 
While I don't think nVidia is going to stop working on DLSS, the current implementation does not require per-game training. I think they use their market position and marketing deals to leverage developers into supporting it. We don't know if the AMD solution will require per-game training, but recent comments from them hint that FidelityFX Super Resolution might not even use ML.


I'm pretty interested to see how Microsoft's solution differs from AMD's. We know Microsoft is working on ML projects for DirectX, and we know AMD is working on an upscaling solution to compete with DLSS, and obviously they partner on plenty of projects. But people talk as if FidelityFX Super Resolution and a DirectML upscaler evangelized by Microsoft aren't going to be separate things. I think everything Microsoft has shown of their solution was some Forza footage, and IIRC it was running on nVidia hardware.
I personally would hope AMD and MS are working independently, and I kind of expect that to be the case.
Different solutions and options are good.
AMD non-ML-based, MS ML-based.

A solution by AMD would likely fare worse than DLSS on PC in terms of widespread use, even if it's open.
Why do you think this?
Are you talking about if AMD's solution is ML-based also? Even so, I'm unsure why you would have that view.

They would need to go the same route as Nvidia and get it incorporated into Unity and Unreal.
 
Why do you think this?
Are you talking about if AMD's solution is ML-based also? Even so, I'm unsure why you would have that view.
Nvidia have 80% market share and a large install base of Turing/Ampere GPUs, yet DLSS, even without the need to train per game anymore, has a relatively small number of integrations. Nvidia also historically have far larger dev relations than AMD. Assuming an AMD solution needs RDNA1/2, I don't see why we'd suddenly see a lot more adoption of an open solution when it only adds a relatively small number of RDNA-based GPUs to the potential user base.
They would need to go the same route as Nvidia and get it incorporated into Unity and Unreal.
Yeah, definitely this should be the big priority at the start.
 