Support for Machine Learning (ML) on PS5 and Series X?

Yeah, when I say a good idea I mean AI that responds under pressure the way a human driver does: variance in racing lines, maybe being abrupt with inputs, overdriving, changing up pit strategy (or adapting it based on what others do), stepping up under pressure. These are just some examples.

I believe AI doing perfect laps that complement the programmed physics model isn't the issue. We basically get that today, with some lap-time variance (difficulty) and an error ratio attached. It's junk.
Deep learning AI is trained, meaning that to get it to behave a certain way, you just need data from the people you want it to emulate.
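As a rough illustration of what "just feed it data from the people you want it to emulate" could look like, here's a minimal behavioural-cloning sketch in PyTorch; the feature set, network size and telemetry format are all made-up assumptions, not anything from a shipping racer:

```python
# Hypothetical behavioural-cloning sketch: learn steering/throttle from
# recorded human telemetry. Shapes and feature choices are assumptions.
import torch
import torch.nn as nn

class DriverPolicy(nn.Module):
    def __init__(self, n_features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh(),  # [steering, throttle/brake] in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

# Fake telemetry standing in for logged human laps:
# features = car state (speed, slip angles, distance to apex, ...),
# targets  = the inputs the human actually applied at that moment.
features = torch.randn(10_000, 16)
targets = torch.tanh(torch.randn(10_000, 2))

policy = DriverPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    pred = policy(features)
    loss = loss_fn(pred, targets)     # imitate what the human did
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In a real game you would obviously swap the random tensors for logged laps from the drivers you want the AI to copy, which is the whole point of the "you just need data" argument.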

What you describe is significantly simpler than winning at Go, which has more board positions than there are atoms in the universe ;) And AlphaGo hasn't lost a match.

They can do it; they just need to want to invest to do it.
 
lol. updated the post above.

Making the AI itself, I think, would be trivial (I honestly expect to see this in Forza 8, running on compute on the XSX).

Game design around the AI requires more nuance; building an AI that people enjoy is the challenge.


^^ more difficult with a situation where the goal is to win without destroying your vehicle ;)
Catering to making the experience exciting for the player is a different animal. Forza 8 won't have nearly as many inputs; they can just push the AI to the extreme limits.

F1 2019's AI is already great!
 
The most interesting thing about AlphaGo Zero for me was the eventual refinement of a generic self-play learning algorithm. They were able to teach it Go from scratch, without providing any data sets, just by having the neural network play against itself. The self-taught version beat the previous version 100-0 when they played, proving the worth of self-play. The generalized self-play algorithm also worked out of the box for chess and produced a ridiculously strong chess engine. I don't remember if that version still required the rules to be taught or whether it was learning from literally zero. The self-play algorithm is very generic and applicable to a wide variety of things.
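Not AlphaGo Zero itself, but a toy sketch of the same self-play idea under heavy simplification: a tabular agent learns single-pile Nim purely by playing against itself and pushing the final win/loss back through the states it visited. The game, learning rate and exploration settings are all illustrative choices:

```python
# Toy self-play sketch (not AlphaGo Zero): a tabular value estimate learns
# single-pile Nim purely from games played against itself.
import random

N_START = 15            # sticks on the table; taking the last stick wins
MOVES = (1, 2, 3)       # legal number of sticks to take
EPS, LR = 0.1, 0.2
# values[s] = estimated win chance for the player about to move with s sticks.
values = {0: 0.0}       # facing zero sticks means you have already lost

def pick_move(sticks):
    """Epsilon-greedy: leave the opponent the lowest-value state we can."""
    legal = [m for m in MOVES if m <= sticks]
    if random.random() < EPS:
        return random.choice(legal)
    return min(legal, key=lambda m: values.get(sticks - m, 0.5))

for game in range(50_000):
    sticks, player, history = N_START, 0, []
    while sticks > 0:
        history.append((player, sticks))
        sticks -= pick_move(sticks)
        if sticks == 0:
            winner = player          # this player took the last stick
        player ^= 1
    # Push the final result back into every state visited this game.
    for p, s in history:
        target = 1.0 if p == winner else 0.0
        values[s] = values.get(s, 0.5) + LR * (target - values.get(s, 0.5))

# After self-play, multiples of 4 should end up with noticeably lower values
# for the player to move, matching the known optimal Nim strategy.
print(sorted((s, round(v, 2)) for s, v in values.items()))
```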
 
It has to be taught the rules. There is still a generic component in there that determines which moves are available and how many look-ahead moves to consider; it's a single policy of the three. However the look-ahead is not too large compared to other engines; it's quite limited.
 

The reason I wasn't certain is that I remember a podcast with the AlphaZero guy where he talked about learning without any given rules. I just don't remember if that was some new stuff he is working on or if it had already been applied in the past. His team had applied self-play without knowing the rules to old games, and the algorithm learnt to play them very well by trying moves completely randomly and seeing the score at the end. Basically learning the rules through trial and error.

edit. I guess it becomes a bit semantic whether enforcing legal inputs and giving a score counts as a rule or not. My perspective was that the DNN doesn't know the rules, but someone from outside says "you won/lost" or "you can't do that, try something else". That way the engine can try moving pieces in incorrect ways and learn the rules by trial and error.
 
That's part of the process of reinforcement learning. I suppose you could train an AI from that level, but I'm unsure of its value here, since the rules are static and you need to be told when your moves are invalid anyway. So either the game tells you that a move is invalid, or you are programmed with the rules. A simple way to train like this is to have a second agent just check moves and respond on validity.
 

Agree. I believe the researchers wanted to generalize so that the algorithm would learn the rules from an outside party saying "that's not ok, that is ok, this is your score". That way the learning algorithm can stay the same even if the rules change, since the rules can be learnt with the help of something outside the algorithm.
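A hedged sketch of that separation: the learner proposes arbitrary moves and only hears "invalid", "ok" or a final score from an outside validator it cannot inspect. The toy game, penalty values and bandit-style update are invented purely for illustration:

```python
# Sketch: the agent proposes arbitrary moves and never sees the rules.
# An outside validator rejects illegal moves; the agent only sees penalties
# and the final score. Game, penalties and update rule are invented here.
import random

ACTIONS = list(range(1, 11))           # the agent may propose any of these
action_value = {a: 0.0 for a in ACTIONS}
LR, EPS = 0.1, 0.2

def validator(action):
    """The 'outside party': in this toy game only even moves are legal."""
    return action % 2 == 0

def step(state, action):
    """Hidden environment: returns (next_state, reward, done)."""
    if not validator(action):
        return state, -1.0, False      # illegal move: penalty, try again
    state += action
    return state, (1.0 if state >= 20 else 0.0), state >= 20

for episode in range(5_000):
    state, done = 0, False
    while not done:
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: action_value[a])
        state, reward, done = step(state, action)
        # Bandit-style update: enough to learn which moves are even legal.
        action_value[action] += LR * (reward - action_value[action])

# Illegal (odd) actions end up with clearly negative values; the agent has
# effectively learnt that part of the rules by trial and error.
print({a: round(v, 2) for a, v in action_value.items()})
```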
 
Fascinating subject going into next gen. Similar to Microsoft's Azure network using the Series X GPU for deep learning, here are Sony's patents in that regard:

Using a cloud gaming network to train a neural network. shorturl.at/hrUZ9

Training a neural network to play games using reinforcement learning. shorturl.at/copuy
 
I'd like to see this kind of stuff in games.
I agree, but I reckon it's more effort to implement something like this (for arguably little gameplay gain) than sophisticated fakery. I mean, there are only so many sitting-on, leaning-on, picking-up, putting-down, stepping-over, ducking-under scenarios you can squeeze into a game, and I wonder if this would free up a huge amount of level designers' time.

I am often reminded of Naughty Dog showing how Uncharted and The Last of Us pathing, AI and animation work, where levels have to be built specifically around it. Toss that out (saving that time) and bring in a more freeform system, and is the saved time now spent stopping the freeform system from doing things you don't want? Similarly, I think the later Assassin's Creed games have some great and natural-looking interactions between the PC, NPCs and the environment.

I love watching this kind of stuff.
 

Might be useful for a game like Hitman where you try to blend in with the environment :?:

I suppose it'd be more interesting with a destructible environment where the debris/obstacles are entirely dynamic.
 

Agreed. But is this a better use of time/resources than the fakery that goes on now, where most devs would just have NPCs stop at an obstruction, then warp them to the other side when you look away? I'm always impressed with games that do proper companion/NPC traversal. It does seem like every time somebody makes something smarter to solve esoteric problem X, they create complex problems Y and Z to replace it.

Don't get me wrong, I'm very much in favour of more realistic, more immersive and frankly better NPC behaviour; I only question whether the effort that goes into it actually beats faking it. It took Havok a long time to convince devs to use their tech instead of just rolling their own or badly faking physics, and that's probably a more universal problem in modern, complex worlds. :runaway:
 
I don't think PS5 has machine-learning support at this point. MS made custom changes to the GPU to allow this, and ML is a big part of DX12 Ultimate, so unless Sony made the exact same changes MS did, and also gave their PS5 API an ML bent, I doubt they do.
 

Laura Miele (EA) talked about ML for PS5 in the second Wired article. If PS5 has no ML, third-party studios will not use it in multiplatform games.

https://www.wired.com/story/exclusive-playstation-5/

"I could be really specific and talk about experimenting with ambient occlusion techniques, or the examination of ray-traced shadows," says Laura Miele, chief studio officer for EA. "More generally, we’re seeing the GPU be able to power machine learning for all sorts of really interesting advancements in the gameplay and other tools."
 
You would expect that if PS5 doesn't have it then multiplatform games won't either, but what if the devs are using DirectX 12 Ultimate, which has ML in it? If the DX API makes it easy to add to the PC and XSX versions, why wouldn't they just do it?
 

Nothing like this is easy to add. It's not magic: the GPU inference is only the easy part. You need to invest in training an AI to do the job.

And they will do it if it's on all platforms, or if they have an incentive like a co-marketing contract.

If the RDNA 2 GPUs don't have the functionality, it probably means it is not on the PS5 GPU either.
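To illustrate the split being described (training is the expensive offline investment, the console only runs inference), here's a small sketch using PyTorch's dynamic int8 quantization as a stand-in for whatever a console runtime would actually use; the model and data are toy placeholders:

```python
# Sketch of the train-offline / infer-cheap split. PyTorch dynamic int8
# quantization stands in for whatever a console runtime would really use.
import torch
import torch.nn as nn

# 1) The expensive part, done offline: train a model (toy regression here).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(4096, 32), torch.randn(4096, 4)
for _ in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    optim.zero_grad()
    loss.backward()
    optim.step()

# 2) Ship a quantized copy: weights stored as int8 for cheap inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# 3) At runtime, only this forward pass happens on the player's machine.
with torch.no_grad():
    print(quantized(torch.randn(1, 32)).shape)
```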
 
AMD still has a ways to go to match Nvidia in traditional gaming performance before worrying about ML and RT. I'm sure they will check both of those boxes with RDNA 2, but their main focus will be matching Nvidia at the high end in traditional gaming. So unless MS or Sony are doing something unique on their own, don't expect too much here. The int4/int8 performance of the XSX compared to a low-end 2060 should already be a good indication of this.

Keep in mind that Nvidia is comfortably ahead and still has a node shrink to take advantage of. They have the luxury of adding new tech, and can also use their experience in the enterprise space to trickle tech downstream.
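For the int4/int8 comparison, a quick back-of-the-envelope: the Series X figures follow Microsoft's published specs (52 CUs at 1.825 GHz, roughly 49 int8 TOPS / 97 int4 TOPS), while the RTX 2060 tensor-core numbers are rough estimates on my part:

```python
# Back-of-the-envelope int8/int4 throughput comparison. Series X numbers
# follow Microsoft's published figures; the RTX 2060 line is a rough estimate.
def tops(units, clock_ghz, ops_per_unit_per_clock):
    """Tera-operations per second from unit count, clock and ops/unit/clock."""
    return units * clock_ghz * ops_per_unit_per_clock / 1000.0

# Xbox Series X: 52 CUs * 64 ALUs at 1.825 GHz; packed int8 = 8 ops/ALU/clock.
xsx_int8 = tops(52 * 64, 1.825, 8)     # ~48.6, quoted as ~49 TOPS
xsx_int4 = tops(52 * 64, 1.825, 16)    # ~97 TOPS

# RTX 2060 (rough estimate): 240 tensor cores at ~1.68 GHz boost,
# ~256 int8 ops per tensor core per clock.
rtx2060_int8 = tops(240, 1.68, 256)    # ~103 TOPS

print(f"XSX int8 ~{xsx_int8:.0f} TOPS, int4 ~{xsx_int4:.0f} TOPS")
print(f"RTX 2060 int8 ~{rtx2060_int8:.0f} TOPS (estimate)")
```

On those rough numbers the 2060 lands around twice the Series X throughput, which is presumably the gap being pointed at.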
 