Support for Machine Learning (ML) on PS5 and Series X?

IMO, it should also persist. In something like GTA V, almost everyone in the city would likely know about what you did and immediately report you to the police and run and hide whenever you came around. The news would spread via TV, radio, print, social media, etc. People with cell phones would be able to take your picture and spread it around.
You're talking about a specifically designed gameplay mechanic, which can be easily programmed; I am talking about an unintended consequence of machine learning should you have self-adapting algorithms applied to NPC behaviour. This is also a risk with machine learning algorithms that adapt to data/stimuli, often without understanding the data.

They even ran into some of the issues people here have been raising, with the drivatar behaviour becoming somewhat of an issue in FM6 because they were driving too much like actual human players (i.e. complete assholes - they eventually added some controls to artificially limit drivatar aggression, to stop them just running everyone off the road).

I don't know if Forza's drivatars are machine learning, but this is a good example of an unintended consequence of adapting to stimuli that the algorithm's programmers cannot foresee at the time of writing. And every single example of machine learning cocking up is the same.
 
So, I was combing through MS patents again, and this one popped up, which seems relevant to this thread:

Machine learning applied to texture compression or upscaling

Methods and devices for generating hardware compatible compressed textures may include accessing, at runtime of an application program, graphics hardware incompatible compressed textures in a format incompatible with a graphics processing unit (GPU). The methods and devices may include converting the graphics hardware incompatible compressed textures directly into hardware compatible compressed textures usable by the GPU using a trained machine learning model.
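Out of curiosity, here's a toy sketch of the runtime flow that description implies: a model takes decoded source blocks and emits BC1 blocks the GPU can sample directly. The tiny (untrained) MLP, the choice of BC1, and the packing details are all my own assumptions; the patent doesn't say what model or target format Microsoft actually uses.

```python
import numpy as np
import torch
import torch.nn as nn

class EndpointPredictor(nn.Module):
    """Toy model: predicts two RGB endpoints for each 4x4 texel block."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(48, 128), nn.ReLU(),
            nn.Linear(128, 6), nn.Sigmoid(),   # two RGB endpoints in [0, 1]
        )

    def forward(self, blocks):                 # blocks: (N, 48) flattened 4x4x3
        return self.net(blocks).view(-1, 2, 3)

def to_rgb565(c):
    r, g, b = (c * np.array([31, 63, 31])).round().astype(np.uint16)
    return (r << 11) | (g << 5) | b

def pack_bc1(block_rgb, endpoints):
    """Pack one 4x4 RGB block into an 8-byte BC1 block using ML-predicted endpoints."""
    c0, c1 = endpoints
    if to_rgb565(c0) <= to_rgb565(c1):         # BC1 4-colour mode expects c0 > c1
        c0, c1 = c1, c0
    palette = np.stack([c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3])
    idx = ((block_rgb[:, None, :] - palette[None]) ** 2).sum(-1).argmin(1)
    bits = 0
    for i, v in enumerate(idx):                # 16 two-bit palette indices
        bits |= int(v) << (2 * i)
    out = np.zeros(8, np.uint8)
    out[0:2] = np.frombuffer(np.uint16(to_rgb565(c0)).tobytes(), np.uint8)
    out[2:4] = np.frombuffer(np.uint16(to_rgb565(c1)).tobytes(), np.uint8)
    out[4:8] = np.frombuffer(np.uint32(bits).tobytes(), np.uint8)
    return out

# Runtime flow: decode source texture -> predict endpoints -> emit BC1 blocks.
texture = np.random.rand(64, 64, 3).astype(np.float32)       # stand-in for the decoded source
blocks = texture.reshape(16, 4, 16, 4, 3).swapaxes(1, 2).reshape(-1, 48)
with torch.no_grad():
    eps = EndpointPredictor()(torch.from_numpy(blocks)).numpy()
bc1 = np.concatenate([pack_bc1(b.reshape(16, 3), e) for b, e in zip(blocks, eps)])
print(bc1.nbytes, "bytes of BC1 for a 64x64 texture")         # 2048 = 0.5 bytes/texel
```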
 
Some really creative methods they have going on here.

I’m not sure if this is just a patent or if they actually have something working. Would love to see this work.

This, coupled with the texture up-resolution at runtime, is starting to make it feel like creativity in this field is absolutely worth the silicon investment in a larger chip that includes some tensor cores.
 
This has come up on here a couple of times but... Remember back in February when James Gwertzman from PlayFab (owned by Microsoft) said this?
James Gwertzman said:
You were talking about machine learning and content generation. I think that’s going to be interesting. One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it.

From here: https://venturebeat.com/2020/02/03/...t-generation-of-games-and-game-development/2/

Kinda seems relevant to that patent.
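For anyone wondering what "ML models uprez the textures in real time" could look like in practice, here's a minimal ESPCN-style sketch (sub-pixel convolution, a common lightweight choice for this). The layer sizes and the 2x factor are my guesses, not whatever that Microsoft studio is actually running:

```python
import torch
import torch.nn as nn

class TextureUpres(nn.Module):
    """Tiny 2x texture up-res network (ESPCN-style sub-pixel convolution)."""
    def __init__(self, scale=2, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            # predict scale^2 sub-pixels per texel, then rearrange them spatially
            nn.Conv2d(32, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):                  # x: (N, 3, H, W) in [0, 1]
        return self.body(x).clamp(0, 1)

# Ship a 2048x2048 texture, reconstruct the 4096x4096 mip at runtime
# (a real integration would likely run this per tile, not per full texture).
lowres = torch.rand(1, 3, 2048, 2048)
with torch.no_grad():
    highres = TextureUpres(scale=2)(lowres)
print(highres.shape)                       # torch.Size([1, 3, 4096, 4096])
```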
 
Trading compute power for bandwidth. May be an interesting choice. Wonder how much it costs to upscale a 2048x2048 texture to a 4096x4096. I'm assuming you'd only ever upscale to your mip0 (4k or 8k) textures and any mip1, mip2 etc would be streamed off the disk. I can't see upscaling more than 1 mip level.
 
Fairly massive savings I think.

4,194,304 px (2048x2048) upscaled to 16,777,216 px (4096x4096):
starting at 4x bandwidth savings if we're looking at transfer rate.

So if you pull in a tile at 2048x2048, say mip1, and then get close enough to that tile that it needs to switch to mip0, you don't need to pull mip0 from disk: you just compute mip0, save that result and use that one. That skips the bandwidth requirement for I/O entirely and can be done via async compute.
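Putting rough numbers on that (the byte figures assume a BC7-style format at 1 byte per texel, purely for illustration):

```python
mip1 = 2048 * 2048          # 4,194,304 texels pulled from disk
mip0 = 4096 * 4096          # 16,777,216 texels produced on the GPU instead
print(mip0 / mip1)          # 4.0 -> 4x fewer texels moved over I/O
print(mip1 / 2**20, "MiB read vs", mip0 / 2**20, "MiB not read")   # 4.0 vs 16.0 MiB at 1 byte/texel
```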
 
There was a lot of mud thrown at Nvidia's tensor cores, but they could end up being a huge benefit that just wasn't visible at the start.
Not just talking DLSS 2.0, but DX12 ML (DirectML) makes use of them also.

It's just a shame that we don't really know how much performance is required for all this ML stuff and how much real-world benefit the reduced precision on XSX gives.
Even dedicating 1 TF could end up being a decent net gain overall.
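For a rough sense of what "dedicating 1 TF" buys, using the reduced-precision rates commonly quoted for Series X (12.15 TF FP32 / 24.3 TF FP16 / ~49 TOPS INT8 / ~97 TOPS INT4) - treat these as ballpark figures, not gospel:

```python
fp32_total = 12.15                     # TFLOPS for the whole GPU
budget_fraction = 1.0 / fp32_total     # the "1 TF" slice of GPU time
for name, total in [("FP16", 24.3), ("INT8", 48.6), ("INT4", 97.2)]:
    print(f"{name}: ~{total * budget_fraction:.1f} T ops/s inside that 1 TF slice")
# FP16: ~2.0, INT8: ~4.0, INT4: ~8.0 -- the packed-math formats make a small
# slice of GPU time go a lot further for inference workloads.
```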
 
I'm glad to see it evolve. Those RTX cards could really see a massive heyday as its usage continues forward.
 
But what's the cost on the GPU side? That's what I'm curious about.
 
Depends on the type of model, the depth and size. The more you attempt to do and the higher the accuracy, the larger the network, thus the longer the processing time.

It can be fairly intensive I suspect and someone will need to look at the maximum number of textures that need to be computed per frame. If it's 1 tile/texture perhaps there is more than enough compute and bandwidth. If it's 100/1000 tiles changing over from mip1 to mip0, suddenly power is going to matter a lot more.
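Quick back-of-envelope using the toy three-conv upscaler sketched further up the thread (all the numbers here are assumptions, purely for illustration):

```python
def conv_flops(h, w, k, cin, cout):
    return 2 * h * w * k * k * cin * cout          # multiply + accumulate = 2 ops

h = w = 2048                                       # one 2048x2048 tile
layers = [(3, 32), (32, 32), (32, 12)]             # the toy upscaler's conv layers
per_tile = sum(conv_flops(h, w, 3, cin, cout) for cin, cout in layers)
budget = 1e12 / 60                                 # "1 TF" spread over 60 fps

print(f"{per_tile / 1e9:.0f} GFLOPs per tile")     # ~114
print(f"{budget / per_tile:.2f} tiles per frame")  # ~0.15
# i.e. a full tile takes several frames at that budget, so you'd want smaller
# tiles and/or to spread the work across frames rather than do it all at once.
```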
 
I don't remember any of that. All I remember is a dumb panda/monkey/lion who'd keep shitting in the villages and eating stuff he wasn't supposed to. It'd be nice to see the idea developed in a style that actually works. TBH I think I'd like to see a resurgence of god-games with ML driving stuff like minions, peeps, and pets so they actually could be trained.

I'd like to see a god game where you have to control your own ML-AI-driven Peter Molyneux and keep him from confusing a media interview with an internal brainstorming meeting.
 
Using sampler feedback to predict upcoming changes, and using SFS to seamlessly blend in any changes which missed the current frame due to being "over budget", might be a good way to distribute the workload over time. Same way it can distribute loads from the SSD over time.

It'd be pretty cool if SF / SFS could be used for both paging texture tiles from the SSD over time and ML upscaling over time.

In fact... I wonder if you could use sampler feedback to indirectly gauge when to load higher LOD models in. For example, if you're about to need a higher LOD texture on a model, you might also know you'll need a higher LOD model too. And if you're running a dynamic resolution system, and/or allow for different target resolutions (e.g. PC or XSX vs Lockhart), maybe such a tied-together LOD system could automatically adapt based on whatever the resolution happened to be at any given time. Tune it once for an optimal pixel size vs detail level, and just let it do its thing wherever it's doing it.
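Something like this is what I'm picturing for spreading the ML work over frames: a per-frame budgeted queue fed by sampler feedback, with anything still pending falling back to the mip it already has (which is what SFS-style blending would paper over). The class and numbers below are made up for illustration; this isn't the actual sampler feedback API.

```python
from collections import deque

class UpscaleQueue:
    """Spends at most a fixed time budget per frame on ML texture upscaling."""
    def __init__(self, ms_budget_per_frame=1.0, ms_per_tile=0.25):
        self.pending = deque()
        self.ms_budget = ms_budget_per_frame
        self.ms_per_tile = ms_per_tile      # assumed cost of one tile's inference

    def request(self, tile_id):             # called for tiles sampler feedback flags
        if tile_id not in self.pending:
            self.pending.append(tile_id)

    def run_frame(self):
        done, spent = [], 0.0
        while self.pending and spent + self.ms_per_tile <= self.ms_budget:
            done.append(self.pending.popleft())
            spent += self.ms_per_tile
        return done                         # these tiles get their ML mip0 this frame

q = UpscaleQueue()
for tile in ["rock_03", "cliff_12", "road_07", "grass_22", "wall_18"]:
    q.request(tile)                         # hypothetical tile names
print(q.run_frame())                        # four tiles fit the 1 ms budget
print(q.run_frame())                        # the straggler lands next frame
```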
 
Trading compute power for bandwidth. May be an interesting choice. Wonder how much it costs to upscale a 2048x2048 texture to a 4096x4096. I'm assuming you'd only ever upscale to your mip0 (4k or 8k) textures and any mip1, mip2 etc would be streamed off the disk. I can't see upscaling more than 1 mip level.

Luckily, the XSX has lots of CUs to throw at it.
 
I've said it before and I'll say it again: invest in tensor and RT cores.
 
How much of the install size would be shaved off if a game ships with low-res textures instead of high-res textures?
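Rough back-of-envelope, assuming BC7-style textures at 1 byte per texel with full mip chains (real install packages vary a lot, so treat this as illustrative):

```python
MiB = 2**20
chain = 4 / 3                               # a full mip chain is ~1.33x the top mip
ship_4k = 4096 * 4096 * chain / MiB         # ~21.3 MiB per texture
ship_2k = 2048 * 2048 * chain / MiB         # ~5.3 MiB per texture
print(f"{ship_4k:.1f} vs {ship_2k:.1f} MiB -> ~{1 - ship_2k / ship_4k:.0%} smaller")
# So the texture-heavy share of an install could shrink by roughly 75% if the
# 4k mips are reconstructed at runtime instead of shipped.
```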
 
Was just thinking that on the XO family they could use ML to convert back to a GPU-format texture during the download. So smaller, quicker downloads; install size stays the same.

On Xbox Series hardware, the install could remain as JPEG and the ML be done in real time. So it would then have a much reduced install size as well, saving the precious SSD space.
They could even do the highest-res ones during download on Lockhart, if there's too much latency or a TF deficiency doing it in real time.

Probably only happens for the odd 1P game for a long time though.
Tune it once for an optimal pixel size vs detail level, and just let it do its thing wherever it's doing it.
This all sounds good, here's hoping.
 
There's no ML that will convert JPEG to BCn; I read that patent wrong... rather, I didn't read past the description.
 
I can't remember the details either, to be fair.
But the important bit is going from a non-GPU texture format to a supported one.
I'm talking about the download package, install size, runtime, and when the inferencing could be done.
 
Hmm, indeed.

This is why I hate patent diving. There are so many, and so many related ones.
This patent was linked for instance:
http://www.freepatentsonline.com/y2019/0304138.html

and I thought it was the same as this patent here:
https://patentscope.wipo.int/search....wapp2nA?docId=US253950223&tab=PCTDESCRIPTION

and I started writing about the second one in reference to the first one. I looked like a bumbling idiot. So yeah, I get where you're going. I'm not going to read either; this stuff is mentally exhausting.
 
Imagine trying to be a judge and decide which patents are and aren't infringed! If it were me, I'd disqualify all patents for being written in gobbledegook! If it's not obvious what you're patenting, you aren't really patenting anything.

Patent culture has completely corrupted the idea into opportunistic obfuscation.

Maybe ML could be repurposed to translate patent-speak into real language?
 