Support for Machine Learning (ML) on PS5 and Series X?

Yes of course. But what I am trying to understand is the part where the Insomniac dev claims their real-time ML inference does not change current game performance.
I don't think we can conclude from this statement whether they use FP16 or INT8/INT4 for ML inference.
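For what it's worth, the precision question mostly comes down to how the weights are stored and multiplied. A minimal NumPy sketch of FP16 versus post-training INT8 weight quantization for a single toy layer (sizes and values are made up and have nothing to do with Insomniac's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: 64 inputs -> 32 outputs (sizes are arbitrary).
w_fp32 = rng.standard_normal((64, 32)).astype(np.float32)
x = rng.standard_normal((1, 64)).astype(np.float32)

# FP16: store and multiply in half precision.
y_fp16 = (x.astype(np.float16) @ w_fp32.astype(np.float16)).astype(np.float32)

# INT8: symmetric per-tensor quantization of the weights.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize and multiply (real hardware would do the integer dot products
# and rescale the accumulated result instead).
y_int8 = x @ (w_int8.astype(np.float32) * scale)

y_ref = x @ w_fp32
print("fp16 max error:", np.abs(y_fp16 - y_ref).max())
print("int8 max error:", np.abs(y_int8 - y_ref).max())
```

Lower precision mainly buys a smaller weight footprint and, on hardware with packed low-precision dot-product instructions, more multiplies per clock; the dev's statement doesn't tell us which of these formats they actually picked.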
 
Yes of course. But what I am trying to understand is the part where the Insomniac dev claims their real-time ML inference does not change current game performance.
I think with respect to this particular type of enhancement, you'd run it as an async call, because you want your vertices worked out before you pass them through for rendering. Whereas the other forms of ML, like super resolution etc., are part of the rendering pipeline, so you can't evade their costs.
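Purely as a scheduling analogy (CPU threads standing in for an async compute queue, and every function here is a hypothetical stand-in), the idea looks roughly like this:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def ml_deform(pose):
    """Hypothetical ML inference producing deformed vertices for a pose."""
    time.sleep(0.001)  # pretend ~1 ms of inference
    return f"vertices_for_{pose}"

def simulate_and_cull():
    """Other frame work that doesn't depend on the deformed vertices."""
    time.sleep(0.002)  # pretend ~2 ms of unrelated work

def render(vertices):
    print("rendering with", vertices)

with ThreadPoolExecutor(max_workers=1) as pool:
    for frame in range(3):
        # Kick the deformation off "asynchronously" at the top of the frame...
        future = pool.submit(ml_deform, f"pose_{frame}")

        # ...overlap it with work that doesn't need the result...
        simulate_and_cull()

        # ...and only block right before the geometry is actually consumed.
        render(future.result())
```

On the GPU the equivalent would be a compute dispatch on an async queue overlapping graphics work, with a fence before the draw that consumes the vertices.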
 
It cannot dramatically change game performance because dynamic resolution can help offset that fixed processing time.
The dev specified it runs as fast as without it, so it shouldn't modify dynamic resolution. Are we starting to doubt the developer's own statements?
We're running as fast as the non-physically based version and did not require any adjustments to the graphical settings.
 
The dev specified it runs as fast as without it, so it shouldn't modify dynamic resolution. Are we starting to doubt the developer's own statements?
He didn't say anything wrong.
Console games tend to have some buffer, since they need to stay above the target frame rate. If SM MM normally runs at 65-67 fps uncapped, it might now be closer to 62-64, for instance.
No change in settings doesn't necessarily mean there wasn't an impact, just that it wasn't drastic enough to force them to change settings.
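A quick back-of-envelope on that buffer argument, using the hypothetical 65-67 fps figures above:

```python
def frame_ms(fps):
    return 1000.0 / fps

cap_ms = frame_ms(60)  # 16.67 ms budget at a 60 fps cap
for uncapped in (65, 67):
    headroom = cap_ms - frame_ms(uncapped)
    print(f"{uncapped} fps uncapped -> {frame_ms(uncapped):.1f} ms/frame, "
          f"{headroom:.1f} ms of headroom under the 60 fps cap")

# An extra ~0.7 ms pass turns 65-67 fps uncapped into roughly 62-64 fps,
# while the capped 60 fps output looks identical.
extra_ms = 0.7
for uncapped in (65, 67):
    new_fps = 1000.0 / (frame_ms(uncapped) + extra_ms)
    print(f"{uncapped} fps + {extra_ms} ms -> {new_fps:.1f} fps uncapped")
```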
 
He didn't say anything wrong.
Console games tend to have some buffer, since they need to stay above the target frame rate. If SM MM normally runs at 65-67 fps uncapped, it might now be closer to 62-64, for instance.
No change in settings doesn't necessarily mean there wasn't an impact, just that it wasn't drastic enough to force them to change settings.
Or it normally runs 65-67 and with ML it runs 67-68 ;)
 
"run as fast as without it" also doesn't sound as performance drop ;)
Yeah, it doesn't. I still think it's an asynchronous call, so it's not part of the actual rendering pipeline, and it can slot its work in when the CUs aren't being used.
 
Yes of course. But what I am trying to understand is the part where the Insomniac dev claims their real-time ML inference does not change current game performance.
Yeah, it doesn't. I still think it's an asynchronous call, so it's not part of the actual rendering pipeline, and it can slot its work in when the CUs aren't being used.


It's probably a very light load on the hardware.
It obviously won't be calculating deformation on a per-triangle basis. I bet it's calculating deformation on a lot less than, say, a hundred bodies (or "triangle conglomerates"), and if we go by the musculoskeletal simulation suites out there (e.g. Anybody Tech and OpenSIM), they operate with a low-pass filter of 7 Hz, which is ~140 ms, or once every ~9 frames if we think about 60 FPS rendering.

Memory footprint and bandwidth should be almost irrelevant, and so is the processing load on the GPU. Even less so if it's e.g. inferencing with 8-bit output values and 4-bit weights.

I also doubt it's lowering the render resolution.
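A back-of-envelope on both of those points, i.e. the 7 Hz update cadence and the weight footprint at reduced precision (the 1M parameter count is entirely made up for illustration):

```python
# Update cadence: a 7 Hz low-pass-filtered signal doesn't need per-frame inference.
period_ms = 1000.0 / 7        # ~143 ms between meaningful updates
frame_ms_60 = 1000.0 / 60     # ~16.7 ms per frame at 60 fps
print(f"~{period_ms:.0f} ms period -> once every ~{period_ms / frame_ms_60:.0f} frames at 60 fps")

# Memory footprint: a hypothetical small deformation net with 1M parameters.
params = 1_000_000
for label, bits in (("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)):
    print(f"{label}: {params * bits / 8 / 1024:.0f} KiB of weights")
```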
 
Also remember this is replacing the old method, not running in conjunction with it.
So it could possibly fit within the same budget, or close enough to say there's no performance impact.

As for the comments on MS I saw:
I like the fact that MS talks about things.
Once people come to terms with the different philosophies, they will also understand that Sony not talking about something doesn't mean they're not doing it / don't have it.

As @seb said, they can all do ML; it's just about performance profiles.
 
Yeah, it doesn't. I still think it's an asynchronous call, so it's not part of the actual rendering pipeline, and it can slot its work in when the CUs aren't being used.
Yup, a natural evolution for using available compute. Last generation was more about shifting traditional CPU tasks to the GPU, sandwiching them in between traditional rendering tasks, which was kind of important given last generation's console CPUs. I think we will see compute used in this particular way more and more this generation, where devs blue-sky uses for available compute that can improve aspects of the game in marginal, sometimes imperceptible ways.
 
So if a studio was going to use the ML abilities of the XSX, and it would require training to be done on supercomputers, who facilitates that? Would the studio have to do that and pay for it, or would Microsoft as the platform owner take that on from their end?

How does it work with Nvidia and DLSS?
 
So if a studio was going to use the ML abilities of the XSX, and it would require training to be done on supercomputers, who facilitates that? Would the studio have to do that and pay for it, or would Microsoft as the platform owner take that on from their end?

How does it work with Nvidia and DLSS?
Nvidia pays.

Training is done by the company that wants ownership of the IP. It's unlikely a small studio will spend money to train their own algorithms; it's just easier to outsource that.
 
Depends what the ML is doing.
ML upscaling:
implemented as a part of Unreal, then it would be - Epic
Xbox for use in PlayFab - MS

But there's other ML that could be done by studios or even middleware. So it just depends. ML isn't one thing, even though it's been talked about a lot in regards to upscaling due to DLSS.
 
I wouldn't be surprised if middleware developers start to add ML-accelerated capabilities to their offerings, now that there is somewhat of a baseline with platforms supporting ML (Xbox and PlayStation of course, but PC now also has ubiquitous support with DirectML, which allows ML to run either in a hardware-accelerated fashion, if it's available, or through a CPU fallback).

As an example, Havok could add ML-based physics simulations to their physics engine, which would be guaranteed to run on all modern platforms, or even something as singular as Havok's cloth simulation.
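To illustrate the "hardware accelerated if available, CPU fallback otherwise" point: with the onnxruntime-directml package you can list the DirectML execution provider first and ONNX Runtime will fall back to the CPU provider when it isn't available (the model file and input shape here are just placeholders):

```python
import numpy as np
import onnxruntime as ort

# Prefer the DirectML (GPU) provider, fall back to plain CPU execution.
session = ort.InferenceSession(
    "cloth_deform.onnx",  # placeholder model file
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print("running on:", session.get_providers())

# Feed whatever the model expects; the shape here is illustrative only.
input_name = session.get_inputs()[0].name
dummy_input = np.zeros((1, 64), dtype=np.float32)
outputs = session.run(None, {input_name: dummy_input})
print("output shape:", outputs[0].shape)
```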
 
This is just an example of using FP16, and compared to running on CPU only it's much faster, but my guess is it would be slower than running on an RTX card using tensor cores.
This could also explain why they're adding reduced precision to even lower-end chips; AI/ML is going to be used for more and more varied things as time goes on.
 
Yes of course. But what I am trying to understand is the part where the Insomniac dev claims their real-time ML inference does not change current game performance.

Take a standard deformation technique that takes up some arbitrary amount of time and replace it with an ML solution that takes up the same time but provides more robust deformation. You end up with no effect on the frame time but a much better level of deformation.
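In rough pseudo-Python (both deformers are hypothetical stand-ins), the argument is just that the frame loop doesn't care which implementation it calls as long as the cost is comparable:

```python
import time

def deform_traditional(mesh, pose):
    """Stand-in for the existing deformation pass."""
    time.sleep(0.001)  # pretend ~1 ms
    return mesh

def deform_ml(mesh, pose):
    """Stand-in for the ML-based pass: same budget, more robust result."""
    time.sleep(0.001)  # pretend ~1 ms as well
    return mesh

def frame_cost_ms(deformer):
    start = time.perf_counter()
    deformer("mesh", "pose")
    # ...the rest of the frame would render the deformed mesh here...
    return (time.perf_counter() - start) * 1000.0

for deformer in (deform_traditional, deform_ml):
    print(f"{deformer.__name__}: {frame_cost_ms(deformer):.2f} ms")
```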
 
Nvidia pays.

Training is done by the company that wants ownership of the IP. It's unlikely a small studio will spend money to train their own algorithms; it's just easier to outsource that.

You can very easily use any of the large cloud compute providers to train an ML model and that’s not necessarily going to set you back a lot of money either.

The biggest challenge really is mostly just getting a good dataset to train on, but one developer and a few hundred dollars can already go surprisingly far. So it’s more a matter of having the idea to do it and the means to generate the data (which can also just be an algorithm).
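To make the "the data can just come from an algorithm" point concrete, here's a toy end-to-end version: sample inputs, run a procedural reference function as the ground truth, and fit a small scikit-learn MLP to it (everything here is hypothetical and only meant to show the workflow):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def ground_truth(pose):
    """Stand-in for an expensive offline algorithm (e.g. an accurate simulation)."""
    return np.stack([np.sin(pose).sum(axis=1), np.cos(pose).prod(axis=1)], axis=1)

# "Generate the data with an algorithm": sample inputs, run the slow reference.
poses = rng.uniform(-np.pi, np.pi, size=(5000, 8))
targets = ground_truth(poses)

# Train a small network to approximate the reference at runtime-friendly cost.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(poses, targets)

test = rng.uniform(-np.pi, np.pi, size=(100, 8))
err = np.abs(model.predict(test) - ground_truth(test)).mean()
print(f"mean absolute error on held-out poses: {err:.3f}")
```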
 