AMD FidelityFX on Consoles

I am very curious about this. DLSS requires machine learning.
FidelityFX is the equivalent of DLSS and supposedly coming to both platforms. But we know the PS5 doesn't have ML. So how is it supposed to have it?
Depends what you mean by equivalent.
If you mean using ML to upscale, then no, it may not be.
If you mean another method of upscaling, then yes, it is.

Also, ML can be done without tensor cores or any of the features in the XSX or RDNA2. It's just about the level of performance required to achieve the results needed, so not having them doesn't automatically rule it out.
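To illustrate the point, here is a minimal sketch in plain C (nothing console-specific, just an illustration): a neural-network layer is only multiplies and adds, which any shader ALU or CPU can execute; tensor cores and INT8/INT4 paths just change how fast the same loop runs.

```c
/* A single fully-connected layer with ReLU, written with ordinary FP32 math.
 * No tensor cores or special instructions required; dedicated ML hardware
 * simply runs this kind of loop much faster. */
#include <stdio.h>

#define IN  3
#define OUT 2

static void dense_relu(const float w[OUT][IN], const float bias[OUT],
                       const float x[IN], float y[OUT])
{
    for (int o = 0; o < OUT; o++) {
        float acc = bias[o];
        for (int i = 0; i < IN; i++)
            acc += w[o][i] * x[i];          /* multiply-accumulate */
        y[o] = acc > 0.0f ? acc : 0.0f;     /* ReLU activation */
    }
}

int main(void)
{
    const float w[OUT][IN] = { { 0.5f, -1.0f, 0.25f },
                               { 1.0f,  0.5f, -0.5f } };
    const float bias[OUT]  = { 0.1f, -0.2f };
    const float x[IN]      = { 1.0f, 0.5f, 2.0f };
    float y[OUT];

    dense_relu(w, bias, x, y);
    printf("%f %f\n", y[0], y[1]); /* prints 0.600000 0.050000 */
    return 0;
}
```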
 
As for how they are doing it? Same as other devs, I presume. No NDA on the results. The results are in DF's Spider-Man PS5 review, the 1080p RT mode video I believe.

Oh well, thought you had some insider information regarding a tech that would match DLSS 2.0. Clearly the Spider-Man '4K' doesn't match what DLSS 2.0 is doing to achieve 4K in terms of IQ and performance.
 
How about you send me some confirmation from Sony that the PS5 GPU has INT8 and INT4 precision then?
Wasn't PS5 said to be RDNA 1.1? If so, then it would appear to have INT8/4 support.

[Screenshot: email reply on RDNA 2 questions from areejs12 of Hardware Times]


In any case, it is highly unlikely for the PS5, as a next-generation console, not to support INT8/4. We also have to remember this is a custom chip, so of course many things will be different compared to RDNA1 on PC, like the geometry engine, ray tracing capabilities and machine learning support.
 
Wasn't PS5 said to be RDNA 1.1? If so, then it would appear to have INT8/4 support.

[Screenshot: email reply on RDNA 2 questions from areejs12 of Hardware Times]


In any case, it is highly unlikely for the PS5, as a next-generation console, not to support INT8/4. We also have to remember this is a custom chip, so of course many things will be different compared to RDNA1 on PC, like the geometry engine, ray tracing capabilities and machine learning support.
So what ML advantage does the XSX have over the PS5 that David Cage was talking about, if the PS5 also has increased perf for INT8/4?
 
Oh well, thought you had some insider information regarding a tech that would match DLSS 2.0. Clearly the Spider-Man '4K' doesn't match what DLSS 2.0 is doing to achieve 4K in terms of IQ and performance.

What kind of IQ are you referring to? One where there isn't any movement, like in all the screenshots you see, or one with actual motion? I tell you what, it wasn't until I bought a 3080 and tried DLSS that I saw how flawed it was. The difference between what I saw online and what I saw in real life was that all the defects were camouflaged by YouTube compression and by most comparisons only using still images.

Let's wait and see, shall we?
 
So what ML advantage does the XSX have over the PS5 that David Cage was talking about, if the PS5 also has increased perf for INT8/4?

Maybe because the XSX GPU is stronger overall.

Just speculation, but perhaps the XSX also has some additional ML hardware that allows, for example, AutoHDR without any FPS loss. MS said the XSX has 57 INT8 TOPS. However, when you calculate how much compute performance the shaders can output at INT8, that comes to around 48 INT8 TOPS, so additional hardware would have to deliver the remaining 9 INT8 TOPS.
 
What kind of IQ are you referring to? One where there isn't any movement, like in all the screenshots you see, or one with actual motion? I tell you what, it wasn't until I bought a 3080 and tried DLSS that I saw how flawed it was. The difference between what I saw online and what I saw in real life was that all the defects were camouflaged by YouTube compression and by most comparisons only using still images.

Let's wait and see, shall we?
Very interesting. In which games weren't you happy with the DLSS 2.0 results?
 
Maybe because the XSX GPU is stronger overall.

Just speculation, but perhaps the XSX also has some additional ML hardware that allows, for example, AutoHDR without any FPS loss. MS said the XSX has 57 INT8 TOPS. However, when you calculate how much compute performance the shaders can output at INT8, that comes to around 48 INT8 TOPS, so additional hardware would have to deliver the remaining 9 INT8 TOPS.
Where did you find this 57 figure?
DirectML – Xbox Series X and Xbox Series S support Machine Learning for games with DirectML, a component of DirectX. DirectML leverages unprecedented hardware performance in a console, with Xbox Series X benefiting from over 24 TFLOPS of 16-bit float performance and over 97 TOPS (trillion operations per second) of 4-bit integer performance on Xbox Series X.
https://news.xbox.com/en-us/2020/03/16/xbox-series-x-glossary/
97 TOPS for INT4, so 48.5 for INT8.
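For what it's worth, a back-of-the-envelope check (my own math, assuming the usual dot4/dot8 rates): the XSX's 12.15 TFLOPS of FP32 doubles to roughly 24.3 TFLOPS at FP16, quadruples to roughly 48.6 TOPS at INT8 (dot4) and reaches roughly 97.2 TOPS at INT4 (dot8), which lines up with the figures in that glossary. So if the 57 INT8 TOPS number is accurate, it would indeed imply around 8-9 TOPS coming from something other than the shader ALUs.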
 
I tell you what, it wasn't until I bought a 3080 and tried DLSS that I saw how flawed it was.

DF sure was very impressed by the DLSS 2.0 results, in both performance and image quality, with the latter more often than not matching and/or exceeding native resolution. From what we have seen from the PS5, it clearly shows it's yesteryear's tech, hence the chase for better reconstruction tech, not to forget one that's available to more than the occasional exclusive.
 
The issue is not with DLSS and its problems. The issue is with marketing and "bro-science" being a replacement for truth, and how this rampant mentality is affecting AMD's capacity to build you a competitive GPU, because buyers are being driven away by the Nvidia mob mentality. Nothing has been seen about FidelityFX, but just look around at how ferociously people defend any of Nvidia's established perceptions. It's dystopian, and the GPU market pricing you see today is just a glimpse into an all-Nvidia future, if people's intelligence keeps being evaporated by marketing and fanboy argumentation.
 
Nothing has been seen about FidelityFX

And that is the problem. With NV we can actually try it and see it for ourselves. DLSS, especially 2.0, does a really good job. It does have its flaws, but the advantages greatly outweigh them.
Seriously, most can't tell the difference between 1080p reconstructed to 4K and native, but the performance difference sure can be noticed. Seeing how much DLSS has improved in such a short time (from DLSS 1 to 2.0), it sure is a tech that can't be unseen anymore. Same for ray tracing.

AMD now wants to chase both RT and reconstruction tech but is behind.

a glimpse into an all-Nvidia future, if people's intelligence keeps being evaporated by marketing and fanboy argumentation.

A bit like Apple then? :p
 
And that is the problem. With NV we can actually try it and see it for ourselves. DLSS, especially 2.0, does a really good job. It does have its flaws, but the advantages greatly outweigh them.
Seriously, most can't tell the difference between 1080p reconstructed to 4K and native, but the performance difference sure can be noticed. Seeing how much DLSS has improved in such a short time (from DLSS 1 to 2.0), it sure is a tech that can't be unseen anymore. Same for ray tracing.

AMD now wants to chase both RT and reconstruction tech but is behind.



A bit like Apple then? :p

It's not a problem yet. It hasn't been announced for today. It's a problem if the next blockbuster RT game comes out and FSR is MIA. We are not there yet.

And being behind means little to nothing. Companies chasing each other's tech is nothing new; it's how we move forward. ATI was the first to make a product with unified shaders, the future of GPUs, and Nvidia caught up real fast. It means nothing and surely does not warrant constructing arguments on why this brand will always be worse than another. It just hurts the industry as a whole. And it provides lots of crow eating. No one expected AMD to match the 2080 Ti, and here we are with the 6900 XT matching the 3090.
 
The patent for Foveated Rendering is in relation to VR, not the PS5.
Guess where VRS comes from.

There was a Sony patent that showed the console having unified L3 Cache in the CPU
There's no Sony patent showing unified L3 cache in the CPU.


I'm not sure Nvidia goes around putting out official statements on everything they do, but there are plenty of articles on them using INT8 and INT4 and the benefits they get over higher precision.
https://developer.nvidia.com/blog/int4-for-ai-inference/
You mean you couldn't find a single official statement from Nvidia proving your "We know DLSS uses INT4 and INT8" statement, which is why you linked to a page with zero mention of DLSS, upscaling or supersampling.



The RDNA white paper refers to Int8 texture sampling which has nothing to do with Int8 for compute.
Yeah sure, just for "texture sampling" and "nothing to do with compute".
First paragraph of page 14 in the RDNA1 whitepaper:

Some variants of the dual compute unit expose additional mixed-precision dot-product modes in the ALUs, primarily for accelerating machine learning inference. A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput, some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations, all of which use 32-bit accumulators to avoid any overflows.
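For anyone unsure what those modes actually do, here is a plain-C sketch of the semantics described in that paragraph (illustration only, not the real GPU instructions): low-precision products summed into a 32-bit accumulator.

```c
/* Illustration only, not real GPU intrinsics: the dot4 (INT8) and dot8 (INT4)
 * modes described above multiply packed low-precision values and sum the
 * products into a 32-bit accumulator so intermediate results cannot overflow. */
#include <stdint.h>
#include <stdio.h>

/* dot4: four signed 8-bit products added to a 32-bit accumulator */
static int32_t dot4_i8(const int8_t a[4], const int8_t b[4], int32_t acc)
{
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

/* dot8: eight signed 4-bit products (each value stored in its own byte here
 * for readability) added to a 32-bit accumulator */
static int32_t dot8_i4(const int8_t a[8], const int8_t b[8], int32_t acc)
{
    for (int i = 0; i < 8; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

int main(void)
{
    int8_t a4[4] = { 127, -128, 64, 3 },        b4[4] = { 2, 2, -1, 5 };
    int8_t a8[8] = { 7, -8, 3, 1, 0, 5, -2, 4 }, b8[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };

    printf("dot4: %d\n", dot4_i8(a4, b4, 0)); /* 254 - 256 - 64 + 15 = -51 */
    printf("dot8: %d\n", dot8_i4(a8, b8, 0)); /* sum of a8 = 10 */
    return 0;
}
```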




Also, not all RDNA cards had INT8 and INT4 capabilities; it was only on some cards. It was not a stock feature found in all of them. That's the same situation with RDNA 2.
Moving the goalposts from "Microsoft changed the GPU to get INT8/4 on RDNA2" to "it was on RDNA1 but not all of them".



which sent all the fanboys into a rage
(...)
The constant "push for victory" as you put it is to correct misinformation that was put out there.
(...)
You really have to be willfully ignorant at this point to continue thinking that the PS5 has VRS.
Here it is, the trolling and flamebaiting.
Thank you for confirming my suspicion that you didn't create this thread to get clarification, but to post concern trolling about the PS5.
Bye then. This thread is useless.
 
Well, for me personally it's in my interest for AMD to keep putting heat on NV, as it will drive each company to create even faster hardware and new tech, and finally more competitive pricing. I've also not stated that AMD will never match or even exceed NV; it's just not happening this generation, I think.
Also, I think there's less to gain in normal rendering than in, say, ray tracing or other tech. A 6900 XT does match a 3090, but once DLSS is considered, it falls behind a lot. Also, the raw compute of Ampere might show its strength with things like UE5 or other engines that favor compute.

I really hope AMD, Intel and NV compete with each other.

There's no Sony patent showing unified L3 cache in the CPU.

Yet it was all over the forums here and everywhere, along with Infinity Cache and other Zen 3, RDNA 3, etc. features and tech. It ended up being a Zen 2 / RDNA 1/2 hybrid. Meh.
 
There are different types of support.
Example:
PS4 Pro supported FP16 RPM (rapid packed math).
The One X supported FP16.
That didn't mean the One X support was worthless, as it did things like ease register pressure.

So it depends what sort of support the PS5 has for the reduced formats. It may not get the same high calculation throughput from them compared to the XSX.
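To make the distinction concrete, here is a rough sketch in plain C (hypothetical packing, not either console's actual shader ISA): even without double-rate FP16 math, packing two 16-bit values into one 32-bit register halves the storage needed, which is the "ease register pressure" benefit; RPM hardware can additionally operate on both halves with a single instruction.

```c
/* Hypothetical illustration, not a real shader ISA: two 16-bit values stored
 * in one 32-bit register. Hardware with only "storage" FP16 support still
 * saves register space this way; RPM hardware can also issue one instruction
 * that does math on both halves at once. */
#include <stdint.h>
#include <stdio.h>

/* pack two 16-bit bit patterns into one 32-bit word */
static uint32_t pack2x16(uint16_t lo, uint16_t hi)
{
    return (uint32_t)lo | ((uint32_t)hi << 16);
}

static uint16_t unpack_lo(uint32_t v) { return (uint16_t)(v & 0xFFFFu); }
static uint16_t unpack_hi(uint32_t v) { return (uint16_t)(v >> 16); }

int main(void)
{
    /* two FP16 bit patterns: 1.0 (0x3C00) and 2.0 (0x4000) in one register */
    uint32_t reg = pack2x16(0x3C00, 0x4000);
    printf("packed: 0x%08X lo: 0x%04X hi: 0x%04X\n",
           (unsigned)reg, (unsigned)unpack_lo(reg), (unsigned)unpack_hi(reg));
    return 0;
}
```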
 