Support for Machine Learning (ML) on PS5 and Series X?

@eastmen No, you don't need the cloud to enable DLSS. It runs locally on your Nvidia GPU. The training is done by Nvidia, but the trained DLSS network ships with the game.
I understand that once Nvidia does the cloud computing, it is then run locally on the Nvidia GPU using the tensor cores. However, it sounds like the hard part is all done in the cloud before consumers have access to it.

So what's so special about the tensor cores that AMD can't emulate with other hardware?
 
I understand that once Nvidia does the cloud computing, it is then run locally on the Nvidia GPU using the tensor cores. However, it sounds like the hard part is all done in the cloud before consumers have access to it.

So what's so special about the tensor cores that AMD can't emulate with other hardware?

The tensor cores are designed to do particular matrix math operations for ML very efficiently.
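To make that concrete, here's a minimal sketch of the primitive those cores accelerate: a fused matrix multiply-accumulate, D = A·B + C, on a small tile, with FP16 inputs accumulated in FP32. The 4x4 tile size here is illustrative, not the exact hardware shape.

```python
import numpy as np

def mma_tile(a, b, c):
    """Fused multiply-accumulate on one tile: D = A @ B + C.
    FP16 inputs, FP32 accumulation - the pattern tensor cores
    implement in hardware for small fixed-size tiles."""
    return a.astype(np.float32) @ b.astype(np.float32) + c

# One 4x4 tile (tile size is illustrative; real hardware fixes it).
a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = mma_tile(a, b, c)
print(d[0, 0])  # each output element is a dot product of length 4
```

The hardware does a whole tile of these per instruction instead of one scalar FMA at a time, which is where the throughput advantage comes from.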
 
The tensor cores are designed to do particular matrix math operations for ML very efficiently.
Didn't DF bring up Microsoft adding more ML hardware to the XSX? Can that not be enabled to do the same thing? Or is it something we need to know more about before we can make a judgement call?

DLSS 2 is not limited to any resolution currently.
So when Nvidia enables it for a game, it can now be used at any resolution? Does it not have to be trained for different resolutions anymore?
 
For example ...

Yes (!) if Sony is remotely interested in game streaming (didn't they partner with Microsoft in this area?)

:)

Base IP or not, the design is done and proven. The feature is well supported and used everywhere. Take it ~

Int8 and Int4 are supported by RDNA2; it's quad-rate Int8 and hex-rate Int4 that MS specifically had AMD customize for their SOC. So the PS5 likely has support for Int8 and Int4 unless Sony removed it (I can't think of any real reason they would have requested that), but unless Sony also had AMD add custom modifications to support quad-rate Int8 and hex-rate Int4, the PS5 likely doesn't have those.
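The kind of operation those increased rates accelerate is a packed dot product in the style of the DP4a instruction: four signed 8-bit multiplies accumulated into a 32-bit integer per lane per clock. A rough functional sketch (the hardware does this in one instruction):

```python
import numpy as np

def dp4a(a4, b4, acc):
    """DP4a-style op: dot product of four signed 8-bit values,
    accumulated into a 32-bit integer. Quad-rate Int8 means the
    ALU issues four such 8-bit MACs per 32-bit lane per clock."""
    a = np.asarray(a4, dtype=np.int8).astype(np.int32)
    b = np.asarray(b4, dtype=np.int8).astype(np.int32)
    return int(acc + (a * b).sum())

print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 0))  # 5+12+21+32 = 70
```

Inference workloads are mostly chains of these low-precision dot products, which is why the extra rates matter for ML and very little else.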

As good uses for ML in games are still up in the air, it would have been a gamble for Sony to have those customizations added to the PS5 SOC. OTOH, Microsoft plans to use the XBSX SOC in Azure clusters, so ML is a plus for them regardless of whether the increased-rate Int8 and Int4 find significant use in games.

MS Game Studios have been experimenting with ways to leverage ML. But the only thing that has thus far been mentioned to the public is using an ML-trained model to upscale low-detail textures into higher-detail textures. Basically, using it as an additional form of texture compression.

Regards,
SB
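The texture-compression angle above can be put in rough numbers. A back-of-envelope sketch (all figures illustrative, assuming roughly 1 byte per texel after block compression): ship a quarter-resolution texture and reconstruct detail on the client.

```python
# Back-of-envelope: storage saved by shipping a low-res texture and
# ML-upscaling it on the client. All numbers are illustrative.
BYTES_PER_TEXEL = 1          # e.g. BC7 block compression, ~1 B/texel

def texture_bytes(side):
    """Storage for a square texture of the given side length."""
    return side * side * BYTES_PER_TEXEL

full = texture_bytes(2048)   # shipped at full detail
low = texture_bytes(512)     # shipped low-res, ML-upscaled 4x on load
print(full // low)           # 16x less storage per texture
```

Even after accounting for shipping the model itself, the saving compounds across thousands of textures, which is the appeal.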
 
Didn't DF bring up Microsoft adding more ML hardware to the XSX? Can that not be enabled to do the same thing? Or is it something we need to know more about before we can make a judgement call?

...

Just in TOPS alone, the Series X is way behind an RTX 2060. I think it's about half for INT8 and INT4, and about half the tensor cores' FP16 TFLOPS. On top of that, tensor cores should have other efficiency advantages from being designed specifically for ML.
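The rough arithmetic behind that "about half" claim, using approximate public figures (Series X 12.15 FP32 TFLOPS with 2x/4x/8x rates for FP16/INT8/INT4; RTX 2060 tensor throughput ~51.6 FP16 TFLOPS, doubling for each precision step down) - treat these as ballpark, not spec-sheet-exact:

```python
# Rough arithmetic behind the comparison. Figures are approximate
# public numbers; treat them as illustrative only.
xsx_fp32 = 12.15                      # Series X shader TFLOPS (FP32)
xsx = {"FP16": xsx_fp32 * 2,          # ~24.3 TFLOPS, double-rate FP16
       "INT8": xsx_fp32 * 4,          # ~48.6 TOPS, quad-rate Int8
       "INT4": xsx_fp32 * 8}          # ~97.2 TOPS, double the Int8 rate

rtx2060 = {"FP16": 51.6,              # ~TFLOPS on tensor cores
           "INT8": 51.6 * 2,          # ~103 TOPS
           "INT4": 51.6 * 4}          # ~206 TOPS

for fmt in ("FP16", "INT8", "INT4"):
    print(fmt, round(xsx[fmt] / rtx2060[fmt], 2))  # ~0.47 each
```

So the ratio lands near one half at every precision, which matches the post's estimate.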
 
DLSS 2 is not limited to any resolution currently.
So when Nvidia enables it for a game, it can now be used at any resolution? Does it not have to be trained for different resolutions anymore?
It does.
The DLSS algorithm itself isn't locked to a particular resolution while we train it. Once you're done training, though, the model is locked to whatever you set it to; you just have to make new models for different settings.

You can make a generic-resolution model, but it's going to be more costly. I don't think DLSS 2.0 takes generic inputs; it's just very unlikely.
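A hypothetical sketch of what "new models for different settings" means in practice: the runtime picks a network keyed by the exact input/output resolution pair it was trained for, rather than using one generic net. All names and filenames here are made up for illustration.

```python
# Hypothetical registry: each trained model is tied to the exact
# (input resolution, output resolution) pair it was trained for.
MODELS = {
    (1280, 720, 2560, 1440): "upscale_2x_1440p.bin",
    (1920, 1080, 3840, 2160): "upscale_2x_2160p.bin",
}

def pick_model(in_res, out_res):
    """Select the model trained for this setting, or fail loudly."""
    key = (*in_res, *out_res)
    if key not in MODELS:
        raise KeyError(f"no model trained for {in_res} -> {out_res}")
    return MODELS[key]

print(pick_model((1920, 1080), (3840, 2160)))
```

A truly resolution-generic model would replace the lookup entirely, but as the post says, training one is costlier.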
 
So what's so special about the tensor cores that AMD can't emulate with other hardware?
Tensor operations, or tensor-based silicon ("tensor" is a marketing name; a tensor here is just an m x n matrix). It's silicon whose main advantage is running deep learning models, given how they work. For other ML algorithms it isn't as efficient, and any other compute mechanism will probably be better to use.

In this sense, tensor cores are more locked down than general compute.
And in the same way, FPGAs are faster than GPUs when it comes to deep learning,
and ASICs are faster than FPGAs.

The more locked down the silicon is, the faster it runs.
But models are changing _all_ the time; compute provides the ultimate flexibility to run all sorts of different algorithms, and Int4/Int8/FP16 support helps speed things up. Tensor cores are more specific and run one particular family of ML algorithms (deep neural networks) faster.

Google was the first company to develop tensor cores and brand them, but theirs are built as ASICs. Anyone can make a tensor core, but you're reserving silicon for it, and it can't do anything else. So if it's not doing deep learning then....
 
Beside procedural animation, there's also 'ML performance beating simulation' with fluids and smoke, like this:

Really don't know how to judge performance for something like this.
This is really nice. For me it seems a very promising application of ML for games.
But skimming the paper I did not find anything about runtime performance; did they mention it? Can this run on a CPU?

I worked on this a lot myself, but it was all physics simulation. I got a walking ragdoll, but natural behavior and actions like carrying stuff or sitting down require more work.
However, it's clear most performance is spent in the physics engine, which I also used as a constraint solver for the motors. I came to the conclusion I could have roughly 10 living ragdolls in a game on an old quad-core i7 920, but it could be more because physics engines have become faster since then.
The biggest plus is that I don't require motion capture data at all, but I'm not sure I could reach the quality shown. I'm optimistic, but... 2 years of work at least.

---

However, personally I'd be most interested in procedural details for game worlds. Obviously for offline content creation, but even more for compression purposes on the runtime client.
Imagine creating games without instancing the same modular models and textures over and over. Assuming we had powerful procedural tools to assist, I guess it would make building game worlds faster, easier and less restricted. And it could solve the illusion-breaking seams of texture and geometry we currently have everywhere.
Also, as I'm working on automated LOD generation for the whole static world and on texture-space shading (not sure yet any of this is practical), it's worth mentioning that all of this makes instancing harder and less attractive than it currently is.

So there is motivation to get rid of instancing and memory budgets for both technical and artistic reasons. Assuming we could make this work, the final limitation becomes storage space, because Rage was not very detailed when you looked closely.

Could ML be used on the client to generate detail from sparse given data, like low res textures and material hints? Probably yes.
Too bad I feel too old to learn about ML. :) I see some alternative options, but likely they are much harder to do...
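A minimal sketch of the "generate detail from sparse data" idea above: upsample a shipped low-res texture, then add a residual from a small convolution. The kernel here is a hand-picked sharpen standing in for weights a trained network would learn; everything is illustrative, not a real pipeline.

```python
import numpy as np

def upsample2x(tex):
    """Nearest-neighbour 2x upsample of a single-channel texture."""
    return tex.repeat(2, axis=0).repeat(2, axis=1)

def conv3x3(img, kernel):
    """3x3 convolution with edge padding, written out explicitly."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return out

# Placeholder "learned" kernel: a mild sharpen standing in for the
# detail residual a trained network would predict from material hints.
kernel = np.array([[0, -0.25, 0],
                   [-0.25, 2.0, -0.25],
                   [0, -0.25, 0]], dtype=np.float32)

low = np.random.rand(16, 16).astype(np.float32)  # shipped low-res texture
high = upsample2x(low)                           # base 2x upscale
detailed = np.clip(conv3x3(high, kernel), 0, 1)  # synthesized detail
```

A real system would replace the fixed kernel with a trained multi-layer network conditioned on material hints, but the client-side shape of the computation is the same.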
 
Nvidia has shown this AI-rendered world. It's interesting to see all the research results coming out of different places.

 
How feasible would it be to use tensor cores for AI? For example, in racing games most competent drivers go online because the AI is terrible and not a challenge. It's certainly not adaptive to changes in strategy, and close racing is very predictable.

Could something like tensor cores process AI faster and allow it to adapt throughout a race?
 
How feasible would it be to use tensor cores for AI? For example, in racing games most competent drivers go online because the AI is terrible and not a challenge. It's certainly not adaptive to changes in strategy, and close racing is very predictable.

Could something like tensor cores process AI faster and allow it to adapt throughout a race?
Yes.
It could do this fairly easily. Racing games are actually a very practical example for deep learning AI, and tensor cores would be very good at processing it.

Forza Motorsport 5+ is already set up to do this, as they record player vectors for driving the track to create AI models of each player. They only need to make a better DL version of drivers people actually want to race against.
 
You’re hired.
lol. updated the post above.

Making the AI I think would be trivial (I honestly expect to see this in Forza 8 running on compute on XSX).

Game design around the AI requires more nuance; building an AI that people enjoy is the challenge.


^^ more difficult in a situation where the goal is to win without destroying your vehicle ;)
Catering to making the experience exciting for a player is a different animal. In Forza 8 there won't be nearly as many inputs; they can just push the AI to the extreme limits.

 
I don't believe there will be less need for people in game development in the future, though the nature of the job might be different. Manual labour is another matter, and Yuval Noah Harari has some interesting thoughts there, like the concept of a useless class and how society might have to change / how societies might react.

I view this as a really exciting time to be alive. Better than the gold rush, steam engine, gas engine and airplanes combined. From 8-bit machines to 32-bit machines to primitive 3D graphics (PS1), programmable shaders, ray tracing, machine learning... What an amazing time to be alive and witness the rapid progress.
 
lol. updated the post above.

Making the AI I think would be trivial (I honestly expect to see this in Forza 8 running on compute on XSX).

Game design around the AI requires more nuance; building an AI that people enjoy is the challenge.


^^ works well in a situation where the only goal is to absolutely win without destroying your vehicle ;)
Catering to making the experience exciting for a player is a different animal.


Yeah, when I say it's a good idea I mean how they respond under pressure as a human driver does: variance in lines, maybe being abrupt with inputs, overdriving and thus changing up pit strategy, adaptive pit strategy based on others, or stepping up under pressure. These are just some examples.

I believe AI doing perfect laps that complement the programmed physics model isn't the issue. We basically get that today, with a variance for lap times (difficulty) and an error ratio attached. It's junk.
 