AMD FidelityFX on Consoles

Discussion in 'Console Technology' started by invictis, Mar 9, 2021.

  1. mr magoo

    Newcomer

    Joined:
    May 31, 2012
    Messages:
    193
    Likes Received:
    322
    Location:
    Stockholm
    What does this have to do with anything discussed here?
     
    PSman1700 likes this.
  2. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    4,634
    Likes Received:
    2,120
    Buying up studios and having everything multiplat instead of greedy exclusives might be a way to kill off PlayStation.

    Perhaps, just like with Dolby Atmos, the PS5 had to have this exotic audio solution no one outside first-party studios will utilize (and one with a less impressive effect to boot).

    The PS4 is doing ray tracing in some games, so it has hardware-featured ray tracing, case closed.

    Indeed, in the same vein, using ML for a subtle thing like muscle deformation on a character, as opposed to reconstructing an image from 1080p all the way to 4K, is a totally different thing.
     
    mr magoo likes this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,038
    Location:
    The North
    Running ML alongside game code, in parallel? I think only tensor cores can do that.

    If you mean being able to run ML inference at fast enough speeds, the main contributors to making a model run faster are the size of the model and any sort of feature engineering that needs to be done before running data through it. If none is required, that is ideal. Then you're just looking at the algorithm it's running and how many layers it needs to provide an acceptable return. Once you figure out the absolute minimums, you can choose to encode it, or mixed-encode it, to lower precision and see whether you lose any more accuracy.

    And this is where int8 would come in. And if for some crazy reason you can encode it further down to int4 and not lose any accuracy, then go for it.
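
    As a minimal sketch of what that encoding step can look like, here is simple symmetric post-training quantization; the weight values, bit widths and clipping scheme are illustrative assumptions, not any console's or vendor's actual pipeline:

```python
def quantize(weights, bits):
    """Symmetric quantization of floats to signed ints of the given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8, 7 for int4
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the quantized integers back to floats."""
    return [x * scale for x in q]

weights = [0.82, -0.41, 0.05, 0.33, -0.97, 0.14]  # made-up layer weights

for bits in (8, 4):
    q, scale = quantize(weights, bits)
    restored = dequantize(q, scale)
    err = max(abs(a - b) for a, b in zip(weights, restored))
    print(f"int{bits}: max reconstruction error = {err:.4f}")
```

    Dropping from int8 to int4 shrinks the representable grid from 255 steps to 15, so the reconstruction error grows; whether that matters is exactly the "see if you lose any accuracy" check described above.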

    Afterwards, it's all about the number of available ALUs you have to run the network. More available threads means you can process larger networks faster (not necessarily true in all cases). So, just throwing out random numbers: if you wanted to transform a 1080p image into a 4K image, your input is roughly 2.1M pixels and your output is roughly 8.3M pixels. Running the later layers of the NN in a single shot (all threads being processed at the same time) requires you to have 8.3M threads available. I'm not even talking about the calculations that are going to happen, just threads. And even maxing out the number of threads may not be ideal for performance, but that's a different topic.

    So basically you're going to have to run multiple cycles per layer to get through all those pixels before you're allowed to move on to the next layer. NNs are serialized in this fashion: there are many nodes in a layer that need processing, but you cannot process the next layer until the previous layer has completed. So having a wide GPU and the bandwidth to feed it will matter in this type of thing. Then you can start factoring in int8/int4 encoding and mixed-precision processing to improve your performance further.
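
    A back-of-envelope sketch of that layer-serial scheduling; the concurrent-thread figure is a made-up assumption (the real number depends on CU count, occupancy, register pressure, and so on):

```python
import math

IN_PIXELS = 1920 * 1080        # ~2.07M input pixels at 1080p
OUT_PIXELS = 3840 * 2160       # ~8.29M output pixels at 4K
CONCURRENT_THREADS = 1_000_000 # hypothetical count of concurrently resident threads

def passes_per_layer(work_items, threads=CONCURRENT_THREADS):
    """Serialized passes needed to cover one layer's work items."""
    return math.ceil(work_items / threads)

# Toy network: two layers at input resolution, two at output resolution.
layers = [IN_PIXELS, IN_PIXELS, OUT_PIXELS, OUT_PIXELS]
print("passes per layer:", [passes_per_layer(n) for n in layers])
print("total serialized passes:", sum(passes_per_layer(n) for n in layers))
```

    With these toy numbers the output-resolution layers need three times as many passes as the input-resolution ones, which is the sense in which a wider GPU chews through the same network in fewer serialized steps.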

    This is the way deep learning works, but it isn't how all machine learning algorithms work. Without knowing what the model is (it may not be a NN) and without knowing what encoding schemes are being used, if any at all, there may be no advantage that the XSX holds over the PS5 except in available ALUs and the bandwidth to feed them, because not all ML algorithms will leverage those features (int4/int8 and mixed-precision dot products).
     
    Jay, fehu, Moik and 3 others like this.
  4. invictis

    Newcomer

    Joined:
    May 28, 2013
    Messages:
    104
    Likes Received:
    63
    So is there some proprietary IP for the training?
    I mean if Microsoft is using its super computers to do the training, why would they then want or allow the results to be used to maybe help say a PS5 game out?
    Same with Nvidia. If they are using their super computers to do all the hard lifting for ML, why would they allow that to help AMD cards out?
     
  5. invictis

    Newcomer

    Joined:
    May 28, 2013
    Messages:
    104
    Likes Received:
    63
    As others have pointed out, you can do ML on your smart phone.
    The question is whether the PS5 has int8 and int4 lower precision.
    There was half-precision FP16 on the PS4 Pro, and it's not a hard guess to think it's there again on the PS5.
    You can use FP16 for ML and have an advantage over using full precision.
    With the XSX we have Microsoft saying on record that they ADDED the hardware required for int8 and int4. They have outlined the specs for that. The fact that they said they added it tends to indicate it wasn't a stock feature. They didn't say they added hardware to get ray tracing on the Series X, because that was a stock RDNA 2 feature.
    We have David Cage saying that the XSX is going to be better at ML than the PS5 because its shader cores are more suited to it. I wonder whatever he could mean by that....
    https://www.google.com/amp/s/wccfte...-powered-shader-cores-says-quantic-dream/amp/
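
    As a rough illustration of the FP16 point: a half-precision value keeps about three decimal digits of mantissa, which is frequently enough for inference weights. This sketch uses Python's struct format 'e' (IEEE 754 half precision) and a made-up weight value:

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

w = 0.123456789                # made-up weight value
w_half = to_fp16(w)
print(f"fp32-ish: {w:.9f}  fp16: {w_half:.9f}  abs error: {abs(w - w_half):.1e}")
```

    The rounding error lands around the fourth decimal place, tiny compared with typical weight magnitudes, which is the sense in which FP16 gives a throughput advantage over full precision without necessarily hurting model output.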

    You also have the PS5 engineer saying that the PS5 didn't have the ML added into it.

    And let's not forget, Microsoft has a track record of highlighting features the XSX has that the PS5 lacks: from pointing out to Digital Foundry that their SoC ran at fixed clocks before Sony announced that theirs was variable, to highlighting VRS, Mesh Shaders and SFS on the XSX knowing the PS5 lacked them. MS has also highlighted the int8 and int4 capabilities of their console.

    Sony has also highlighted things that it has over the XSX such as Cache Scrubbers and the faster SSD.

    We spent six months hearing from Sony fanboys that the PS5 had VRS, Mesh Shaders and SFS because they were RDNA 2 features, and as the PS5 was RDNA 2 it must have them by default. They held on like grim death until it became overwhelmingly obvious that this wasn't the case. We still have holdouts who think the PS5 has VRS because of the VR patent, and that Primitive Shaders and the GE on the PS5 are the same as Mesh Shaders, even though AMD had the GE and Primitive Shaders in their older GPUs.
     
  6. see colon

    see colon All Ham & No Potatos
    Veteran

    Joined:
    Oct 22, 2003
    Messages:
    2,059
    Likes Received:
    1,144
    Just because you do or don't care for a result doesn't mean that it's not relevant to the conversation regarding machine learning on consoles.

    I was assuming that AMD would do the training, since we are discussing "AMD FidelityFX On Consoles" and not anything specific to the Xbox or Microsoft. So we can add that assumption as another link in the assumption chain for guaranteed success. But it is possible that Microsoft has a specialized solution that leverages Series hardware. And again, if it's easy enough to implement, and offers performance or IQ benefits, developers will use it.
     
    mr magoo and PSman1700 like this.
  7. vjPiedPiper

    Newcomer

    Joined:
    Nov 23, 2005
    Messages:
    118
    Likes Received:
    72
    Location:
    Melbourne Aus.
    Sorry, I hope this isn't too OT...

    But my understanding of the Auto-HDR feature is that it's simply a custom SDR-to-HDR transform applied on a game-by-game basis, and that while it is TRAINED by ML, the training is simply there to get the right values for the colourspace conversion that forms the SDR -> HDR transform.

    What's more, I think this happens in the output block of the Series consoles, and not strictly within the GPU itself. Which is very different from what we are talking about regarding using "traditional" GPU resources to implement an AI-inference-based upscaling method.

    I actually don't know how much separation there is in the current-gen consoles between the GPU silicon and the "output block" (which I would consider to include the HDMI encoder).

    I know in the past I have speculated that combining GPU-based upscaling with intelligent work in the output block might lead to the best possible result for any upscaling system, simply due to the powerful and flexible nature of the output block on the Series consoles.
     
  8. ThePissartist

    Veteran Regular

    Joined:
    Jul 15, 2013
    Messages:
    2,030
    Likes Received:
    995
    You're a broken record, @invictis. I suggest dropping this persistent rumour of yours; it's really going nowhere other than round and round in circles.

    It's happened several times before now in this very thread and frankly it's boring to repeatedly go through it every few pages.

    You realise the majority of people on B3D don't care for the platform warring and bias? You own an Xbox, good for you - they're good consoles. Just drop the boring "Microsoft has X feature, Sony does not". You've done it before, and again, and again. We heard you, it's acknowledged that there may be differences in the ML implementation.

    Let's now check out how the different implementations pan out.
     
    ToTTenTranz and Shortbread like this.
  9. invictis

    Newcomer

    Joined:
    May 28, 2013
    Messages:
    104
    Likes Received:
    63
    Yeah AMD could take the lead on that for sure.
    But we still have AMD doing the training for their GPUs, Nvidia for DLSS, and no doubt Intel will have some sort of ML on their new cards as well.
    But note, this is me asking questions without knowing just how much is involved in doing the AI training for a game. I'm assuming it's quite complex; otherwise every game would have DLSS upscaling as a boost.
    I'm also assuming the way Nvidia goes about their training might well be different from how AMD would do it.

    So with all those assumptions, I may well be off the track.
     
  10. invictis

    Newcomer

    Joined:
    May 28, 2013
    Messages:
    104
    Likes Received:
    63
    What rumour are you talking about? Missed that.
    If you're referring to lower-precision hardware in the PS5, there's no rumour about that.
    We know MS added it to the XSX. Sony has yet to state they have it. Devs have come out and said the XSX's shader cores are more suited to ML than the PS5's, which they wouldn't do if both contained the same tech, now would they? The only word on the matter from anyone at Sony was one of their engineers, who said it didn't have the ML stuff in it.
    The best that people like you can say is that some AMD cards had it as well.
    If you don't like the discussion about what features are or aren't in the consoles, maybe don't quote people and challenge them on it?
    It's an opt-in system really.

    Most people on B3D don't system-war, that's correct. Most like to get to the truth about what a system does or can do. Nowhere did I say the XSX is a better machine than the PS5 because it has it; in fact, you can read where I said I don't think anything will come of the ML features on the XSX. I don't think they will be adopted in any real form, and maybe, just maybe, one of MS's own studios might play around with them.
    I'm the worst Xbox fanboy in the world talking like that.

    My point about the Sony fanboys was that, just like with the ML, they held on to the PS5 having VRS, for instance, because it was an RDNA 2 feature, so it must be on the PS5. The reality is that neither the PS5 nor the XSX is full RDNA 2; they are both custom chips. That doesn't mean one is better than the other. From what I understand, there were no performance gains in RDNA 2 vs RDNA 1; the changes were around the RT, VRS, Mesh Shader and SFS additions, and improving power efficiency.

    If it makes you feel better, I'm happy to talk about Cache Scrubbers and how they are in the PS5 and not the XSX, and I would love to dig down and find out just what this feature will mean, and what sort of performance advantage the PS5 can expect from it.

    There is no point talking about tech if you want to steer away from talking about any advantages it gives. It's the whole reason Nvidia, Sony, AMD and Microsoft introduce these features into their products, and consumers like us should enjoy talking about exactly how these changes work.
     
    mr magoo and PSman1700 like this.
  11. BoardBonobo

    BoardBonobo My hat is white(ish)!
    Veteran

    Joined:
    May 30, 2002
    Messages:
    3,535
    Likes Received:
    462
    Location:
    SurfMonkey's Cluster...
    By inference, the dogs were discussing whether one, or more, of the blocks were blue. A human stops and says the one on the right is Blue. So by inference the left one is not Blue otherwise the human would have said they are both Blue and not just picked out one.

    I'm not disagreeing with your point, just being OCD about the example :D
     
  12. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    Nvidia doesn't and won't share it.

    AMD would. A simple example is FidelityFX: it's open source and can run on Nvidia cards as well.

    Some of the reasons behind the difference in approach come down to where each is in the market.
    AMD, due to market share, needs as much uptake and usage of their tools as possible, since using Nvidia's toolkit locks them out and leaves them at a disadvantage.

    In regards to MS they can approach it a few different ways, or all.
    • Make it free to use with DX12U; this would lock out Vulkan and PS5 and give studios additional reasons to use DX.
    • Put it in something like PlayFab, so it can be used on Vulkan and PS5; the more people use features from PlayFab, the more likely they are to use other features it has to offer, like Azure, basically ending up upselling.

    PlayFab is an MS suite that can be used by anyone, including Sony.

    Models being able to run everywhere doesn't mean they're free to.
     
    #272 Jay, Apr 23, 2021
    Last edited: Apr 23, 2021
  13. see colon

    see colon All Ham & No Potatos
    Veteran

    Joined:
    Oct 22, 2003
    Messages:
    2,059
    Likes Received:
    1,144
    While I think nVidia is going to continue working on DLSS, the current implementation does not require per-game training. I think they use their market position and marketing deals to leverage developers into supporting it. We don't know whether the AMD solution will require per-game training, but recent comments from them hint that FidelityFX Super Resolution might not even use ML.

    I'm pretty interested to see how Microsoft's solution differs from AMD's. We know Microsoft is working on ML projects for DirectX, and we know AMD is working on an upscaling solution to compete with DLSS, and obviously they partner on plenty of projects. But people talk as if FidelityFX Super Resolution isn't going to be a separate thing from a DirectML upscaler evangelized by Microsoft. I think all Microsoft has shown of their solution was some Forza footage, and IIRC it was running on nVidia hardware.
     
  14. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,297
    Likes Received:
    4,735
    Location:
    Pennsylvania
    I'm sure there are many developers who simply don't want to spend the time incorporating DLSS into their game, or whose game engine has issues that prevent DLSS integration. There are probably several reasons beyond the technical why most games don't have DLSS, and we're not really privy to Nvidia's requirements. It makes sense, though, for as many developers as possible to integrate it, provided it's worth their time and their game actually benefits from it (i.e. the game's graphics are demanding at higher resolutions).

    A solution by AMD would likely fare worse than DLSS on PC for widespread use, even if it's open.
     
  15. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    I personally would hope AMD and MS are working independently, and I kind of expect that to be the case.
    Different solutions and options are good.
    AMD non ML based, MS ML based.

    Why do you think this?
    Are you talking about the case where AMD's solution is ML-based as well? Even so, I'm unsure why you would have that view.

    They would need to go the same route as Nvidia and get it incorporated into unity and unreal.
     
  16. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,297
    Likes Received:
    4,735
    Location:
    Pennsylvania
    Nvidia have an 80% market share and a large install base of Turing/Ampere GPUs, yet DLSS, even now that it no longer needs per-game training, has a relatively small number of integrations. Nvidia also historically has far larger dev relations than AMD. Assuming an AMD solution requires RDNA 1/2, I don't see why we'd suddenly see much more adoption of an open solution when it only adds a relatively small number of RDNA-based GPUs to the potential user base.
    Yeah, definitely this should be the big priority at the start.
     
  17. chris1515

    Legend Regular

    Joined:
    Jul 24, 2005
    Messages:
    6,176
    Likes Received:
    6,536
    Location:
    Barcelona Spain
  18. Seanspeed

    Newcomer

    Joined:
    Apr 23, 2021
    Messages:
    12
    Likes Received:
    8
    Frustratingly, they didn't mention FidelityFX VRS, which could have ended much of the debate here. lol
     
  19. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    4,634
    Likes Received:
    2,120
    It would be very confusing if the PS5/XSX etc. didn't support those FidelityFX features.
     
  20. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    How so?
    Could implement a software version. :runaway:
     
    PSman1700 likes this.