AMD FidelityFX on Consoles

Come on -- this forum is so silly. Everyone here should understand what hardware acceleration vs software means. It's completely unknown how much of a hardware-accelerated advantage (if any) the XSX has over the PS5 on machine learning, but that's what's being speculated about. You can do machine learning inference on a toaster CPU -- it's just a question of how fast, and how much it competes with other resources.

If (big if!) the XSX has some powerful hardware-accelerated machine learning advantage, that'd be very significant for a DLSS-like situation, where you need to do a lot of ML calculation but it defeats the purpose if you're competing with your own renderer for GPU time.
Exactly. The question is whether the PS5 has lower-precision INT8 and INT4 to accelerate ML. We know the PS5 has FP16, and that gives half precision.
At this point, unless Sony clarifies otherwise, the PS5 is lacking the same hardware changes the XSX has for lower precision.
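For anyone unsure what "lower precision to accelerate ML" actually buys you, here's a minimal NumPy sketch of INT8 quantisation. The layer sizes, scales and data are all made up for illustration; nothing here reflects either console's hardware.

```python
import numpy as np

# Hypothetical small layer: weights and an input activation vector, all invented.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)
x = rng.normal(0.0, 1.0, size=(256,)).astype(np.float32)

def quantize_int8(t):
    """Symmetric per-tensor INT8 quantisation: real value ~= int8 * scale."""
    scale = np.abs(t).max() / 127.0
    q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
    return q, scale

w_q, w_scale = quantize_int8(w)
x_q, x_scale = quantize_int8(x)

# What INT8 inference amounts to: integer multiplies accumulated into INT32,
# with a single rescale back to real values at the end.
y_int = w_q.astype(np.int32) @ x_q.astype(np.int32)
y_approx = y_int.astype(np.float32) * (w_scale * x_scale)

y_ref = w @ x  # FP32 reference
print("max abs error vs FP32:", np.abs(y_approx - y_ref).max())
```

The accuracy loss is usually tolerable for inference; the idea behind packed INT8/INT4 support is that the GPU can push several of those integer multiply-accumulates through a lane per clock instead of one full-precision one.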
 
And the thing is, we may well be arguing over something that will have little effect anyway.
Even though the Series X has lower-precision abilities, has a proven ML API in DirectML, and we know certain Xbox Game Studios have been working with ML to increase resolution in games such as Forza Horizon, I don't know if we will see anything of any real effect come to the console because of it.
Nvidia GPUs don't have the same setup as the XSX for ML, and are more powerful than the Xbox at ML. We also have the PS5 lacking the same functions as the XSX, so what devs are going to spend the time to exploit ML on the XSX when it isn't going to flow over to Nvidia GPUs, nor any of the older GPUs in PCs (which are the majority), nor the PS5?
It seems like if anyone was going to use it, it would be Xbox Studios, and then again, with so many more hardware additions to use like Mesh Shaders and SFS, why look at ML now?
 
Nvidia GPUs don't have the same setup as the XSX for ML, and are more powerful than the Xbox at ML. We also have the PS5 lacking the same functions as the XSX, so what devs are going to spend the time to exploit ML on the XSX when it isn't going to flow over to Nvidia GPUs, nor any of the older GPUs in PCs (which are the majority), nor the PS5?
It seems like if anyone was going to use it, it would be Xbox Studios, and then again, with so many more hardware additions to use like Mesh Shaders and SFS, why look at ML now?
This is where developer-facing tools become more important than the technobabble of specs. If a middleware developer embraces a feature like ML for certain features (upscaling, deformation, etc.), then it can be more easily supported across a wider swath of games. If Microsoft provided an ML upscaler with their development tools that was easily implemented, that would probably have a high adoption rate as well. If it's relatively easy and offers tangible benefits to image quality or performance, developers will use it. Nothing except the PS4 Pro had hardware for CBR (well, PS5 now as well), but there were still a bunch of games that used it. And the PS4 Pro has a fairly small install base.
 
New RE8 Demo lists the following technologies in use for PS5:
FidelityFX Contrast Adaptive Sharpening (CAS)
FidelityFX Single Pass Downsampler (SPD)
FidelityFX Combined Adaptive Compute Ambient Occlusion (CACAO)


 
This is where developer-facing tools become more important than the technobabble of specs. If a middleware developer embraces a feature like ML for certain features (upscaling, deformation, etc.), then it can be more easily supported across a wider swath of games. If Microsoft provided an ML upscaler with their development tools that was easily implemented, that would probably have a high adoption rate as well. If it's relatively easy and offers tangible benefits to image quality or performance, developers will use it. Nothing except the PS4 Pro had hardware for CBR (well, PS5 now as well), but there were still a bunch of games that used it. And the PS4 Pro has a fairly small install base.
True, but a lot goes into ML. There's the training, which would be specific to each console or GPU. So for "X" games released on XSX, PS5 and PC, it would require Nvidia training and Microsoft training, and I'm not sure just how expensive or involved that would be.
 
If (big if!) the XSX has some powerful hardware-accelerated machine learning advantage, that'd be very significant for a DLSS-like situation, where you need to do a lot of ML calculation but it defeats the purpose if you're competing with your own renderer for GPU time.
Actually, no, it doesn't defeat the purpose.
That's like saying using CBR or TI defeats the purpose because it's using the same hardware to do the resolve.

If the net result is a better overall image in <= the same time budget, then it's a better use of the hardware.
Don't forget you would be rendering the image at a lower resolution (the saving), then using inference to upscale it (the cost).
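To put rough, purely hypothetical numbers on that saving-versus-cost trade-off (nothing below is measured on any real console or game):

```python
# Hypothetical frame-budget arithmetic for "render low, upscale with ML" vs native.
# None of these numbers are measurements; they only illustrate the trade-off.
frame_budget_ms   = 16.7   # 60 fps target
native_render_ms  = 15.0   # assumed cost of rendering natively at full resolution
low_res_render_ms = 7.5    # assumed cost at the lower internal resolution
upscale_ms        = 2.0    # assumed cost of the ML upscale pass on the same GPU

ml_path_ms = low_res_render_ms + upscale_ms
saving_ms  = native_render_ms - ml_path_ms

print(f"native: {native_render_ms:.1f} ms, low-res + ML upscale: {ml_path_ms:.1f} ms")
# Sharing the GPU only "defeats the purpose" if the upscale cost eats the rendering saving.
print(f"net saving: {saving_ms:.1f} ms ({'worth it' if saving_ms > 0 else 'not worth it'})")
```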
 
True, but a lot goes into ML. There's the training, which would be specific to each console or GPU. So for "X" games released on XSX, PS5 and PC, it would require Nvidia training and Microsoft training, and I'm not sure just how expensive or involved that would be.
No, you wouldn't need new models. If you need an example, MS demonstrated that by using Nvidia's upscaling models in their DirectML presentation, which can run on any GPU.
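I don't have the code from that presentation, but the general shape of "one pretrained model, whatever GPU happens to be present" looks something like this ONNX Runtime sketch. The model file name and tensor shapes are invented, and the DirectML provider is just one possible backend.

```python
# Sketch only: run a single pretrained upscaling model on whatever GPU is available.
# "upscaler.onnx" is a hypothetical model file, not the one from MS's presentation.
import numpy as np
import onnxruntime as ort

# Try the DirectML provider first (Windows GPUs), fall back to CPU.
session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# A fake low-resolution RGB frame as input (the real layout depends on the model).
low_res = np.random.rand(1, 3, 540, 960).astype(np.float32)
input_name = session.get_inputs()[0].name
high_res = session.run(None, {input_name: low_res})[0]
print("upscaled output shape:", high_res.shape)
```

The point being that the model itself is just data; which execution provider actually runs it is the runtime's problem, not the model's.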
 
Exactly. The question is whether the PS5 has lower-precision INT8 and INT4 to accelerate ML. We know the PS5 has FP16, and that gives half precision.
At this point, unless Sony clarifies otherwise, the PS5 is lacking the same hardware changes the XSX has for lower precision.

This is a logical fallacy. If the INT8/INT4 hardware is present in all AMD GPUs, then it's likely also in the PS5.

It's not for Sony to confirm all hardware features; they haven't done that for generations.
 
This is a logical fallacy. If the INT8/INT4 hardware is present in all AMD GPUs, then it's likely also in the PS5.

It's not for Sony to confirm all hardware features; they haven't done that for generations.
It isn't present in all AMD GPUs. Not sure where you get that from.
There are a number of things present in RDNA 2 GPUs that are not present in both the XSX and PS5.
The PS5 has RDNA 1 ROPs, for instance. It lacks the Mesh Shaders, VRS and SFS that are in both the XSX and RDNA 2 GPUs.
The PS5 is a custom GPU, with some features of RDNA 1 and RDNA 2 cards.
Please show me where Sony has said the PS5 has both INT4 and INT8 abilities.
Don't show me an AMD card with it. It means nothing.
 
If they are lucky enough to use the same models, then you can have a closer comparison on performance, I think. But I'm not sure if IP restrictions around models will be a factor. Just thinking out loud. If MS or Sony deploys their own algorithms for upscaling or AA, it will not show up on the competing platform.

From AMD's wording, it seems to me that they want FSR to be adopted by everyone: PC game developers, Xbox developers and PlayStation developers. Mass adoption of an IHV-independent, well-performing upscaling technology is probably the only way to kill off DLSS2, and for that they need multiplatform devs working on both consoles.



New RE8 Demo lists the following technologies in use for PS5:
FidelityFX Contrast Adaptive Sharpening (CAS)
FidelityFX Single Pass Downsampler (SPD)
FidelityFX Combined Adaptive Compute Ambient Occlusion (CACAO)

I've seen some reports/rumors of Sony sending word to their 1st party studios to adapt their tools to make it easier to port their games to the PC at a later date. Perhaps what this means in practice is them adopting the FidelityFX suite as well (and doing so Sony style: not making a fuss about it).
 
This is a logical fallacy. If the INT8/INT4 hardware is present in all AMD GPUs, then it's likely also in the PS5.

It's not for Sony to confirm all hardware features; they haven't done that for generations.

Likewise, just because something exists in a PC GPU does not mean it exists in a console GPU. Console GPUs are semi-custom, meaning that they can and do differ from their desktop counterparts not only in what is included (extra features) but in what is excluded. Why? Die space is expensive. Each semi-custom customer may decide that the transistors for X feature would be better spent on something else or, in extreme cases, not worth the additional transistor (die space) cost.

It may be there, it may not be there. Sony not saying it's there isn't evidence that it's not there. But at the same time, it existing on a different platform (PC) doesn't mean that it is there.

One camp obviously uses the former as evidence that it's not there, which is a false assumption. The other camp uses the latter as evidence that it does exist, which is another false assumption.

Neither camp is terribly helpful in actually trying to tease out whether X feature exists or not. And both camps actually hinder the process by claiming X feature exists or doesn't exist anytime someone trying to determine whether it is there puts forth a hypothesis based on how a game is rendering something.

This is, of course, complicated by the fact that hardware support for something isn't what determines whether an implementation can or can't be done. Just about anything (RT included) can be done with or without hardware support. Specific hardware support just makes it faster.

VRS can be done with and without specific hardware support for VRS. Implementation of an ML model can be done with and without specific hardware support.

On PC, we'd have an easier time teasing out this information even if no manufacturer information were available, as we can do apples-to-apples comparisons between two different hardware implementations. On consoles this isn't something we can do, so the task becomes far more difficult.

The only things we do know as a fact
  • XBS has specific hardware support for packed lower precision INT formats.
  • XBS has specific hardware support for some form of VRS.
What we don't know as fact: whether the PS5 has specific hardware support for any of that. And anyone claiming (as fact) that it does OR does not is just speaking out of their behind.

So, for example, the fact that Spiderman: MM has implemented an ML model isn't proof that the PS5 has hardware support for packed lower precision INT formats, as that could be implemented even if the PS5 has absolutely no support for any INT format other than single-issue 16-bit INT and single-issue 32-bit INT. Likewise, there's nothing anyone can say that would point to it NOT having hardware support for packed lower precision INT formats.
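To illustrate that point with a toy sketch (pure Python, so only the arithmetic matters, not the speed): a packed INT8 dot product and an ordinary 32-bit one give the identical answer; dedicated packed hardware just collapses the four multiply-accumulates into one instruction per lane. The values here are arbitrary.

```python
import numpy as np

# Four signed 8-bit values per operand, as they'd sit packed in one 32-bit lane.
a = np.array([ 12, -45,  7,  99], dtype=np.int8)
b = np.array([-33,  14, 80, -21], dtype=np.int8)

# "Packed" result: what a dp4a-style instruction computes -- four INT8 products
# accumulated into a single INT32.
packed_dot = int(np.dot(a.astype(np.int32), b.astype(np.int32)))

# "Unpacked" result: the same dot product with plain 32-bit integer math,
# one multiply-add at a time. Slower per element, identical answer.
plain_dot = 0
for ai, bi in zip(a.tolist(), b.tolist()):
    plain_dot += ai * bi

assert packed_dot == plain_dot
print("dot product either way:", packed_dot)
```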

Even if we had an XBS version to compare it to, we couldn't necessarily deduce whether it does or does not have support.
  • IF XBS ran significantly faster, that might be evidence that PS5 may not have hardware support for packed lower precision INT.
    • However, this could also just mean that the rendering used is more conducive to wide/slow compute versus narrow/fast compute and nothing to do with packed lower precision INT support.
    • Or, the title could be pushing CPU and GPU equally, and thus the PS5 is hitting its power limit before the XBS hits a thermal limit.
    • Or, the title is far more optimized for the XBS.
    • Or, some other thing.
  • IF both ran roughly the same or it was faster on PS5, that might be evidence that PS5 may have hardware support for packed lower precision INT.
    • However, since this is multiplatform, this could also just mean that Capcom didn't put in any additional code to use the packed lower precision INT formats on XBS.
    • Or, that the ML model they use doesn't benefit from packed lower precision INT.
    • Or, the title is just far more optimized for the PS5.
    • Or, some other thing.
Basically, all of us talking about it are the equivalent of Dogs (color blind) discussing whether two objects in front of us are Blue or Not Blue. Some helpful human walked by and said the right one is Blue, so we know it's Blue. However, no one has walked by to tell us whether the left one is Blue or Not Blue. Now, how do we as Dogs that can't see color determine if the left one is Blue or Not Blue? :)

Discussing whether it's there or not is fine. Stating that it is or is not there as a fact? That's just wrong.

Regards,
SB
 
And yet PlayStation is the only current console platform that actually has machine learning being performed on a commercially available game.

There's more evidence of it being there than it being absent.
 
True, but a lot goes into ML. There's the training, which would be specific to each console or GPU. So for "X" games released on XSX, PS5 and PC, it would require Nvidia training and Microsoft training, and I'm not sure just how expensive or involved that would be.
Well, nVidia now has a generalized process that doesn't require training for every game. So that would leave hypothetical training for XSX, PS5, and PC. But I would assume that training done for an AMD GPU on XSX would also apply to RDNA 2 GPUs on PC, since they are both using DX12U. That would maybe leave the PS5 out, but... I think the point of the FidelityFX suite is that you have compatibility within that suite. As in, if you train an AI for FidelityFX's upscaling, that training model would work across all GPUs that support FidelityFX upscaling. And that assumes a need to train for every game. Or that AMD's upsampling solution requires, or even uses, ML.

Even if we go with the assumptions that it's ML based, and that PS5 has a deficiency in ML performance, and that you have to train per game, that doesn't mean that it can't perform the upscaling, nor does it mean that upscaling from a lower resolution with the overhead of upscaling would be less performant than native rendering. Which means that it's still a useful tool and developers will use it if it's easy enough to implement and performs well.

Also, chaining together a bunch of assumptions is a guarantee for success. Trust me, I looked it up on reddit.
 
And yet PlayStation is the only current console platform that actually has machine learning being performed on a commercially available game.

There's more evidence of it being there than it being absent.

And Crysis Remastered has RT on Xbox One X even without RT-capable hardware. This doesn't prove anything; the PS5 may have dedicated hardware support for ML or it may not. Right now we only know that it is capable of running ML.
 
And yet PlayStation is the only current console platform that actually has machine learning being performed on a commercially available game.

There's more evidence of it being there than it being absent.

That is evidence of ... a possibility. It could be done in compute. It could just be using single issue INT 32 or INT 16. It could be using packed INT 16 which would be an evolution of PS4 Pro's dual issue FP 16. It could mean there is single issue INT 4 or INT 8. Or it could mean there is packed INT 4 or INT 8. All of those are possibilities and to claim that it is definitely proof of any one of those would be irresponsible.

I know you stated it as a hypothetical that it may point to it having something other than just dual issue INT 16. That's true. But it's just as likely that it does not have anything other than dual issue INT 16. BTW - we don't even know if it has dual issue INT 16. It may, it may not.

Just like if a title never came out with support for an ML model on PS5, that wouldn't be proof that it didn't exist. Although if a title came out that used an ML model on XBS but not on PS5, that could indicate that the PS5 didn't have support. But even then you couldn't say that the PS5 definitely had no support; in that case, it could be that it wasn't fast enough for the developer on PS5, or API/SDK support for it wasn't easy to implement, or some other thing prevented its use.

But yes, it is evidence that it could be there. It is not evidence that it is there.

Regards,
SB
 
From AMD's wording, it seems to me that they want FSR to be adopted by everyone: PC game developers, Xbox developers and PlayStation developers. Mass adoption of an IHV-independent, well-performing upscaling technology is probably the only way to kill off DLSS2, and for that they need multiplatform devs working on both consoles.
Possibly, yeah; path of least resistance, I guess. But if either makes something internally for their own use that will outperform FSR from AMD, I suspect they will be using that. There is still an outside chance that a middleware vendor/Nvidia will license their models for use.
 
And yet PlayStation is the only current console platform that actually has machine learning being performed on a commercially available game.

There's more evidence of it being there than it being absent.
Xbox's Auto HDR uses machine learning. That means every Xbox One game running on Series consoles is a commercially available game that uses ML.
 
Xbox's Auto HDR uses machine learning. That means every Xbox One game running on Series consoles is a commercially available game that uses ML.

Yes and it looks wonderful *ahem*

I agree with @Silent_Buddha when he states that we simply don't know. So the constant claims suggesting it's absent should really stop until more solid information is available. Especially since ML has been demonstrated in a practical manner on the platform.
 
We do lots of ML on FP16, FP32, and FP64 as well.
This idea that these ML models must be INT8/INT4 encoded needs to burn in a fiery pit. Running an ML model is evidence of nothing. I can run ML models on my 2006 MacBook if I wanted to.
There desperately needs to be more focus on the models themselves as opposed to the hardware that runs them. The models ultimately determine whether you can run them in real time. No real-time ML inference model ever requires a supercomputer to run. So it's the model that needs to run fast, not the hardware. Many of our edge IoT devices that have very little power also run ML algorithms; hint, they are designed to do so.
In the same way, for any algorithm, ML or not, when asking it to run at 60-120 fps there are a lot of hoops you need to go through, and any poor call to memory, crappy latency, lack of instructions, or misuse of cache will cause poor performance.
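To put some invented numbers behind "it's the model that needs to run fast", here's a quick back-of-the-envelope for two hypothetical upscaling networks; every figure below is assumed rather than taken from any console or shipping model.

```python
# Back-of-the-envelope per-frame cost of two hypothetical upscaling networks.
def conv_stack_gflops(width, height, layers, channels, kernel=3):
    """Rough FLOPs for a plain stack of KxK convolutions at a fixed resolution."""
    macs_per_pixel = layers * kernel * kernel * channels * channels
    return 2 * macs_per_pixel * width * height / 1e9      # 2 ops (mul + add) per MAC

budget_ms        = 4.0   # assumed slice of a 16.7 ms frame given to the upscaler
effective_tflops = 5.0   # assumed sustained throughput left over for inference

for name, layers, channels in (("tiny net", 3, 8), ("heavier net", 8, 64)):
    gflops = conv_stack_gflops(1920, 1080, layers, channels)  # run at the input res
    ms = gflops / effective_tflops                            # GFLOPs / TFLOPS == ms
    verdict = "fits" if ms <= budget_ms else "blows the budget"
    print(f"{name}: ~{gflops:.0f} GFLOPs -> ~{ms:.1f} ms ({verdict})")
```

Same hardware, same budget; whether it's feasible in real time is decided almost entirely by the model.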

Having INT8/INT4 mixed-precision dot products is not going to make these types of problems go away.

ML is often associated with needing a lot of hardware power because training is often what is talked about when thinking about needing as much horsepower as possible. Running the model as fast as possible is a completely separate thing from training.

I hope that helps. Let's all just agree that it doesn't matter. INT8/INT4/mixed, who cares. The only thing you should care about is whether a model even exists to run; if you're running any model, then the console is doing ML there and then. We will never know if the PS5 has INT8/INT4 support unless it is leaked.
 
I think the argument is not what hardware you need to do ML, but what kind of hardware lets you do it efficiently enough to run it alongside gaming code.
 