AMD FidelityFX on Consoles

:LOL: Great!

Are we certain it didn't happen before the patch? Perhaps it's just a lack of experimentation and experience with the various play modes before the patch. After all, it seems like a high number of possibilities and combinations to go through in such limited time to rule everything out.
Yes.
The problems started after the patch. It took them about two months to release a newer patch that was supposed to fix it, but the problems still exist (less so in resolution mode, but still there).
 
The problems started after the patch. It took them about two months to release a newer patch that was supposed to fix it, but the problems still exist (less so in resolution mode, but still there).

Ah, it came after the original patch; I was thinking it was only after the latest patch.
 
Game Stack Live on April 20-21

https://developer.microsoft.com/en-us/games/events/game-stack-live/?OCID=AID3029149&

One of the talks:

Denoising Raytraced Soft Shadows on Xbox Series X|S and Windows with FidelityFX (Presented by AMD)
With raytraced visuals bumping rendering quality even higher than ever before, a significant amount of fine-tuning is required to maintain real-time performance. A typical way to achieve this is to trace fewer rays, and to make sense of the noisier output that method delivers. This presentation will explain how the AMD FidelityFX Denoiser allows for high-quality raytracing results without increasing rays per pixel, and deep dives into specific RDNA2-based optimizations that benefit both Xbox Series X|S and PC.
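To make the "trace fewer rays and make sense of the noisier output" idea concrete, here's a tiny toy sketch of the general principle (my own numpy illustration, not AMD's denoiser and nothing RDNA2-specific): one shadow ray per pixel gives a noisy binary estimate, and a bit of spatial filtering plus temporal accumulation converges towards the clean many-rays-per-pixel result.

```python
# Toy sketch of 1-ray-per-pixel shadows + spatio-temporal accumulation.
# Illustrative only; this is not the FidelityFX Denoiser.
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64
true_shadow = np.zeros((H, W))
true_shadow[:, W // 2:] = 0.7  # right half of the screen is 70% occluded

def noisy_frame():
    """One 'ray' per pixel: a Bernoulli estimate of the occlusion value."""
    return (rng.random((H, W)) < true_shadow).astype(np.float64)

def box_blur(img, r=1):
    """Tiny spatial filter standing in for a proper edge-aware denoise pass."""
    out = np.zeros_like(img)
    taps = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            taps += 1
    return out / taps

history = noisy_frame()
alpha = 0.1  # history blend weight; real denoisers adapt this per pixel
for _ in range(32):  # accumulate over 32 frames
    history = (1 - alpha) * history + alpha * box_blur(noisy_frame())

print("mean error, single 1rpp frame:", np.abs(noisy_frame() - true_shadow).mean())
print("mean error, accumulated:      ", np.abs(history - true_shadow).mean())
```

The actual denoiser of course has to handle motion, disocclusion and keeping shadow edges sharp, which is presumably where the clever (and RDNA2-optimized) part lives.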
 
https://videocardz.com/newz/amd-fidelityfx-super-resolution-fsr-to-launch-this-year-for-pcs
You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways. What matters the most to us is what game developers want to use because if at the end of the day it is just for us, we force people to do it, it is not a good outcome. We would rather say: gaming community, which one of these techniques would you rather see us implement so that this way it can be immediately spread across the industry and hopefully cross-platform.

— Scott Herkelman
 
'Not necessarily based on machine learning

Interestingly it was mentioned that FSR may not actually be based on machine learning like NVIDIA DLSS is. This is actually not very surprising considering that RDNA GPUs do not have tensor cores, specialized cores accelerating machine learning computations. The Microsoft DirectML implementation of super-resolution does not need Tensor cores either.'

Just some corrections to the article here, because it uses the terms interchangeably.
DLSS uses deep learning neural networks as the basis for its algorithm, which is specifically what Tensor cores are designed to accelerate. All other forms of machine learning are typically done on general compute.

I would argue that they are definitely going to be using machine learning at the very least; I'm not sure the other algorithms will come even close. Deep learning networks are still the ideal approach for this purpose, however.
 
'Not necessarily based on machine learning

Interestingly it was mentioned that FSR may not actually be based on machine learning like NVIDIA DLSS is. This is actually not very surprising considering that RDNA GPUs do not have tensor cores, specialized cores accelerating machine learning computations. The Microsoft DirectML implementation of super-resolution does not need Tensor cores either.'

Just some corrections to the article here, because it uses the terms interchangeably.
DLSS uses deep learning neural networks as the basis for its algorithm, which is specifically what Tensor cores are designed to accelerate. All other forms of machine learning are typically done on general compute.

I would argue that they are definitely going to be using machine learning at the very least; I'm not sure the other algorithms will come even close. Deep learning networks are still the ideal approach for this purpose, however.

It is PR...they have done the same thing before...*sigh*
 
Just some corrections to the article here, because it uses the terms interchangeably.
DLSS uses deep learning neural networks as the basis for its algorithm, which is specifically what Tensor cores are designed to accelerate. All other forms of machine learning are typically done on general compute.

I would argue that they are definitely going to be using machine learning at the very least; I'm not sure the other algorithms will come even close. Deep learning networks are still the ideal approach for this purpose, however.

Scott Herkelman: "You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways."

I think you should send an e-mail to Scott telling him and his teams to stop wasting time on evaluating non-ML approaches, then.
j/k


It matters very little that the article uses DNNs and ML interchangeably, because AMD's boss of everything Radeon didn't cut those corners. What matters are Scott Herkelman's own words, which point to ML possibly (and, from his wording, most probably) not being used at all.

For example, RIS works pretty well for a 10-20% reduction in total resolution. However, it lacks any temporal information. What if AMD found a way to include temporal information for an algorithm that uses RIS as a base? That wouldn't use any ML.
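To be clear about what I mean, here's a speculative toy sketch (pure guesswork on my part, not AMD's algorithm): if you jitter a half-resolution render each frame and accumulate the samples into a full-resolution history, you recover detail over time with zero ML, and a RIS/CAS-style sharpen could then run on top of the reconstructed output.

```python
# Speculative toy: temporal accumulation of jittered half-res frames.
# Not AMD's method; it just shows that temporal data alone recovers resolution.
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
truth = rng.random((H, W))  # stand-in for the native-resolution frame (static scene)

def render_half_res(offset):
    """Sample the scene at half resolution (per axis), shifted by a per-frame offset."""
    return truth[offset[0]::2, offset[1]::2]

def scatter_to_full_res(img, offset):
    """Place the half-res samples back onto the full-res grid (zeros elsewhere)."""
    out = np.zeros((H, W))
    out[offset[0]::2, offset[1]::2] = img
    return out

# Accumulate four jittered half-res frames into a full-res history buffer.
history = np.zeros((H, W))
weight = np.zeros((H, W))
for offset in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    lo = render_half_res(offset)
    history += scatter_to_full_res(lo, offset)
    weight += scatter_to_full_res(np.ones_like(lo), offset)

reconstructed = history / weight
print("max reconstruction error:", np.abs(reconstructed - truth).max())  # 0.0 for a static scene
```

The hard part in a real game is motion: you need motion vectors and history rejection so stale samples don't ghost, which is exactly where most of the engineering effort would go.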
 
My hope was that it would be a solution that doesn't depend on devs implementing it, so it would have better "usability" than DLSS, but it seems that's not the case. DLSS is only just breaking through now after years of nVidia pushing it; I don't see devs running to FSR...
 
I think you should send an e-mail to Scott telling him and his teams to stop wasting time on evaluating non-ML approaches, then. j/k
Using ML for anything is generally the fallback position. ML is what you use when humans cannot develop an algorithm to solve a problem. It's the sledgehammer-to-crack-a-nut approach.

It's often easier, but generally a disproportionately computationally intensive solution. It's definitely not the future; it should be a stop-gap for most problems.
 
Scott Herkelman: "You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways."

I think you should send an e-mail to Scott telling him and his teams to stop wasting time on evaluating non-ML approaches, then.
j/k


It matters very little that the article uses DNNs and ML interchangeably, because AMD's boss of everything Radeon didn't cut those corners. What matters are Scott Herkelman's own words, which point to ML possibly (and, from his wording, most probably) not being used at all.

For example, RIS works pretty well for a 10-20% reduction in total resolution. However, it lacks any temporal information. What if AMD found a way to include temporal information for an algorithm that uses RIS as a base? That wouldn't use any ML.
Essentially though, in my time just perusing DirectML, DML is very much built around supporting machine learning tasks. In particular, I was pointing at this part of the article:
Interestingly it was mentioned that FSR may not actually be based on machine learning like NVIDIA DLSS is. This is actually not very surprising considering that RDNA GPUs do not have tensor cores, specialized cores accelerating machine learning computations. The Microsoft DirectML implementation of super-resolution does not need Tensor cores either.
Those are the writer's words and not Scott's, which he quoted below (Scott never mentions ML or DL in any of those posted quotes). To the writer I would say that just because you aren't necessarily using deep learning doesn't mean you aren't using machine learning, which I guess is the article's assumption, not Scott's. A lot of computer vision uses machine learning, but not all of it uses deep learning (though that is the norm now for a lot of tasks).

But there are a lot of different ways to cut a cake, right? I'm not judging the solution, as I don't know what it is. If they have a way to do super resolution without machine learning, I'm all for it; it very well could be a temporal RIS solution, which I'm totally okay with as well. At the end of the day, not restricting what people can do is where innovation will happen.
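As an aside, here's what "machine learning that isn't deep learning" can look like for this kind of problem: a hypothetical toy where a 3x3 restoration filter is fitted with ordinary least squares instead of a neural network. It only illustrates the distinction and has nothing to do with whatever AMD actually ships.

```python
# Hypothetical toy: "ML without DL" for image restoration.
# A 3x3 linear filter is fitted with least squares (no neural network).
import numpy as np

H = W = 64
y, x = np.mgrid[0:H, 0:W]
# Smooth, periodic test image so the wrap-around shifts below are exact.
reference = 0.5 + 0.5 * np.sin(2 * np.pi * 6 * x / W) * np.cos(2 * np.pi * 4 * y / H)

def box_blur(img):
    """Degrade the image; this stands in for a soft upscale."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / 9.0

blurry = box_blur(reference)

# Design matrix: each row holds the 3x3 neighbourhood of one blurry pixel.
columns = []
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        columns.append(np.roll(np.roll(blurry, dy, 0), dx, 1).ravel())
A = np.stack(columns, axis=1)  # shape (H*W, 9)
b = reference.ravel()

weights, *_ = np.linalg.lstsq(A, b, rcond=None)  # the "training" step
restored = (A @ weights).reshape(H, W)

print("mean error before fit:", np.abs(blurry - reference).mean())
print("mean error after fit: ", np.abs(restored - reference).mean())
```

That's "learning" in the classical sense (fit parameters to data), but it runs as a plain 9-tap filter on any shader core, no tensor hardware required.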
 
My hope was that it would be a solution that doesn't depend on devs implementing it, so it would have better "usability" than DLSS, but it seems that's not the case. DLSS is only just breaking through now after years of nVidia pushing it; I don't see devs running to FSR...
I don't think FSR being a driver toggle was ever on the table. AMD has always said this was being developed with developers, which they wouldn't need to do if it could be a driver feature.
You do raise a good point about the time needed between AMD talking about it and the first titles shipping with it. Ideally, the first FSR-enabled title gets released (late) this year, because it looks like some devs are already working with earlier versions of the tech, but I reckon it could take quite a bit longer than that.



Using ML for anything is generally the fallback position. ML is what you use when humans cannot develop an algorithm to solve a problem. It's the sledgehammer-to-crack-a-nut approach.

It's often easier, but generally a disproportionately computationally intensive solution. It's definitely not the future; it should be a stop-gap for most problems.
Add to that the fact that AMD doesn't need to come up with a tech that justifies the cost and die area spent on dedicated and exclusive ML inference cores, because they don't have them.
 
Using ML for anything is generally the fallback position. ML is what you use when humans cannot develop an algorithm to solve a problem. It's the sledgehammer-to-crack-a-nut approach.

It's often easier, but generally a disproportionately computationally intensive solution. It's definitely not the future; it should be a stop-gap for most problems.
That's not true; I would disagree with this, with some caveats in your favour.

There are particular functions that computers are just terribly suited to perform: natural language processing, computer vision (detection, classification, etc.), translation, and very deep strategy (think Go, chess, etc.). Those areas have easily seen 40-50 years of research, and regular algorithms have gotten nowhere close to the performance we can achieve with deep learning.

However, on the flip side, using deep learning outside of those particular applications seems like overkill, where a generic algorithm or even a traditional ML algo is likely to be much faster and nearly as effective.
 
I don't think FSR being a driver toggle was ever on the table. AMD has always said this was being developed with developers, which they wouldn't need to do if it could be a driver feature.
You do raise a good point about the time needed between AMD talking about it and the first titles shipping with it. Ideally, the first FSR-enabled title gets released (late) this year, because it looks like some devs are already working with earlier versions of the tech, but I reckon it could take quite a bit longer than that.




Add to that the fact that AMD doesn't need to come up with a tech that justifies the cost and die area spent on dedicated and exclusive ML inference cores, because they don't have them.

You mean just like their DXR implementation...we all know how that performs (or not)...
The lag from "announcement" to any games using this really makes me think NVIDIA caught them flat-footed and off balance with DLSS.

One more sign of this is them trying to sound "different" than NVIDIA (hence they avoid talking about Deep Learning).

And since you mentioned "die space"... you do know the amount of die area used in Turing/Ampere, right?

Then we can compare what converting that die space to CUDA cores would do... or not do.

Instead of the "sour grapes"...
 
In particular, I was pointing at this part of the article:
Those are the writer's words and not Scott's, which he quoted below (Scott never mentions ML or DL in any of those posted quotes).
He clearly mentioned:
You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways.
 
He clearly mentioned:
I don't have a problem with what Scott said, because what he said is accurate.
I have a problem with the writer's interpretation of what Scott said, as the writer implies you don't need Tensor cores to perform a DirectML-type super resolution. That's still machine learning, just not deep learning, and even then deep learning is still done on compute anyway.

Scott is basically saying there are many ways to approach super resolution; it doesn't have to be what nvidia has done.
 
Add to that the fact that AMD doesn't need to come up with a tech that justifies the cost and die area spent on dedicated and exclusive ML inference cores, because they don't have them.
It only matters if AMD's solution is anywhere near as good as DLSS2.
Regardless of the reasons DLSS was developed, it's doing a very good job.

When I read people say DirectML is different from Nvidia's Tensor cores, it just shows their lack of understanding.
DirectML will also use Tensor cores if that's the more performant code path.

To be honest, it just sounds to me like AMD is saying "We're gonna try lots of different things; we don't have a clue how we're going to do it yet."
 
I think it will be something like Control's DLSS 1.9: not on par with DLSS 2.0, but very light on performance. The question is whether it will be good enough to give better results than, for example, dynamic resolution.
 
It only matters if AMD's solution is anywhere near as good as DLSS2.
Regardless of the reasons DLSS was developed, it's doing a very good job.

When I read people say DirectML is different from Nvidia's Tensor cores, it just shows their lack of understanding.
DirectML will also use Tensor cores if that's the more performant code path.

To be honest, it just sounds to me like AMD is saying "We're gonna try lots of different things; we don't have a clue how we're going to do it yet."
A bit extreme with the opinion, but I agree with the technical tidbits from your posts.

AMD may very well be, or have been, pursuing a non-ML-based approach for some time now, and that's fine. It doesn't need to be better than DLSS2 to be useful; it just needs to provide a better output/performance ratio than the existing non-ML upscaling solutions. Otherwise we'd go back to temporal/checkerboard reconstruction.
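For reference, checkerboard reconstruction boils down to something like this toy sketch (my simplification, not any console's actual implementation): render half the pixels per frame in a checkerboard pattern, then fill the holes from the previous frame where the history is valid, or from neighbours where it isn't.

```python
# Toy checkerboard-reconstruction sketch; real implementations add motion
# vectors, ID buffers and edge-aware filters on top of this basic idea.
import numpy as np

H = W = 32
y, x = np.mgrid[0:H, 0:W]
frame = 0.5 + 0.5 * np.sin(0.4 * x) * np.cos(0.3 * y)  # stand-in for a rendered image

mask_even = (x + y) % 2 == 0   # pixels rendered on even frames
mask_odd = ~mask_even          # pixels rendered on odd frames

# This frame renders only the "odd" pixels; the "even" ones are missing.
rendered = np.where(mask_odd, frame, 0.0)

# Option 1: reuse last frame's even pixels (exact here because the scene is static).
previous = np.where(mask_even, frame, 0.0)
temporal_fill = rendered + previous

# Option 2: no usable history (disocclusion) -> average the 4 rendered neighbours.
neighbour_avg = (np.roll(rendered, 1, 0) + np.roll(rendered, -1, 0) +
                 np.roll(rendered, 1, 1) + np.roll(rendered, -1, 1)) / 4.0
spatial_fill = np.where(mask_odd, rendered, neighbour_avg)

print("temporal fill error:", np.abs(temporal_fill - frame).mean())  # exactly 0 here
print("spatial fill error: ", np.abs(spatial_fill - frame).mean())   # small but nonzero
```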

Though I'm not bullish about them still being in 'evaluation'; ML/DL-type algos take many months of research and a significant amount of resources to develop.

MS can probably get away with buying Nvidia models. lol it's their fastest method of getting there.
 
I would argue that they are definitely going to be using machine learning at the very least; I'm not sure the other algorithms will come even close. Deep learning networks are still the ideal approach for this purpose, however.
Those are the writer's words and not Scott's, which he quoted below (Scott never mentions ML or DL in any of those posted quotes).

Here's the video:


At 34 minutes and 25 seconds, when asked about FidelityFX Super Resolution, you will hear AMD's General Manager of the Graphics Business Unit say, and I quote:

Scott Herkelman said:
What I could tell you, is that you don't need Machine Learning to do it.
You can do this many different ways, and we're evaluating the many different ways. And what matters to us is what game developers would want to use.


I'll agree that he didn't put ML out of the question, but it sure sounds like that's not the path AMD is taking for FSR. He also very specifically mentions ML.



To be honest, it just sounds to me like AMD is saying "We're gonna try lots of different things; we don't have a clue how we're going to do it yet."
They definitely have a clue on how to do it IMO.
However, all their language so far points to them wanting to make sure it gets very wide adoption, to the point of becoming a cross-platform, cross-IHV standard. That's probably taking the brunt of their software development effort, and it's why they need to be super careful to make it work.

The last thing AMD wants is to just present a competitor to DLSS2 with similar performance but on Radeon GPUs. That's how they lose because they don't have the same marketshare or weight on PC developers as nvidia.
I.e., the only way they get this to become a standard and make developers drop the nvidia-exclusive DLSS2 is if FSR works just as well on nvidia hardware (similar IQ, similar performance gains).

It's how FreeSync won over G-Sync, or how RIS won over DLSS1. Or how async and FP16 optimizations were mostly disregarded on the PC until Turing came out.
 