Sony PlayStation 5 Pro

Where does it use AI/ML?

There were rumors that Marvel's Spider-Man 2 would use ChatGPT for AI conversations, but this was never verified. I'm fairly confident that even if it were the case, they wouldn't be real-time ChatGPT conversations; more likely the studio used the tool offline to quickly generate lots of simple dialogue, which was then put into the game afterwards.
muscle deformation

 
Ah that's right, I remember hearing about that.

I'd guess a lot of this work was done offline, and while some of it might still be real time, the fact that they could introduce it in a patch without performance degradation suggests it needs pretty limited GPU resources.

So unless this ML animation technique is near magic and doesn't actually cost performance, it's more likely that it simply doesn't require much extra work from a GPU that lacks the kind of INT8 acceleration we know Xbox has, meaning a lot of this comes from offline work.
 
There is no ML being done at runtime IIRC. They used ML in the development process.
 
No, because FSR 3.1 probably uses AI with INT precision.
Let's wait until June to see if that claim holds. It's not quite clear what FSR 3.1 is.

But decoupling frame gen from FSR 3.1 means it could be used with PSSR etc.

Has frame gen been used in any console titles yet?
 
There is a tech used on the Meta Quest 2 and 3 that uses AI frame prediction to generate frames. It can take a minimum of 36 fps and create 36 more frames to reach 72 fps, but it creates weird warping of geometry if used too heavily in fast-moving scenes.
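For anyone curious what that prediction step boils down to mechanically, here's a rough sketch (illustrative Python/NumPy, not Meta's actual implementation) of forward-warping the last rendered frame along per-pixel motion vectors; the holes that fast-moving pixels leave behind are exactly where the warping artifacts come from:

```python
import numpy as np

def extrapolate_frame(frame, motion, scale=0.5):
    """Forward-warp `frame` along per-pixel motion vectors.

    frame:  (H, W, 3) array, the last fully rendered image.
    motion: (H, W, 2) array of per-pixel motion in pixels/frame.
    scale:  how far along the motion to push (0.5 = halfway
            towards where the next real frame would be).
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Predicted destination of every pixel, clamped to the image.
    dst_x = np.clip((xs + motion[..., 0] * scale).astype(int), 0, w - 1)
    dst_y = np.clip((ys + motion[..., 1] * scale).astype(int), 0, h - 1)
    out = np.zeros_like(frame)
    # Forward splat: fast-moving pixels leave holes behind them,
    # which is where the "weird warping" shows up in practice.
    out[dst_y, dst_x] = frame[ys, xs]
    return out
```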
 
Imo it will be $100 more than the PS5. So $599 in the US, £579 in the UK, etc. Same as it was for the PS4 Pro. Anything more than that is too much for a console refresh.

I somewhat agree. I think the PS5 Pro will be $599 in the US, but that is for a Digital Edition, so in essence it will be $150 more than the PS5 Digital Edition at current pricing.
 
I think it's going to be $499 for the Digital Edition, maybe $549 max.
 
I agree. PS5 pricing showed the faithful were willing to spend top dollar. They can always drop the price if it doesn't shift, but they can't (so readily) up the price $100 if they find themselves below the ideal price curve. And contemporary Sony has shown they aren't afraid of higher prices, given PSVR2 and Portal pricing not being as low as many anticipated.
 
Let's not forget the impact that COVID had on people. That is all behind us now. People are no longer constrained to stay at home.
 
Let's wait until June to see if that claim holds. It's not quite clear what FSR 3.1 is.

But decoupling frame gen from FSR 3.1 means it could be used with PSSR etc.

Has frame gen been used in any console titles yet?
On PS4, since 2016, in AFAIK almost all 120 Hz PSVR games, going from 60 fps to 120 fps.
 
There was hardware involved in that.

Not quite the same as FSR 3.

The thing is, it's still not fully clear whether it runs on PS5. Frame gen requires a specific optical flow swap chain feature on DX12, and that isn't any form of neural network.

So if you can program around it, or have an equivalent in GNM, I don't see a reason the PS5 can't have FSR 3.
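To make the swap-chain point concrete, here's a very rough, API-agnostic sketch of what an interpolation-based frame-gen present loop has to do (the render/interpolate/present callbacks are hypothetical placeholders, not FSR 3's actual interface); the platform-specific part is exactly the hook that lets you present frames the game never rendered:

```python
import time

def present_loop(render_real_frame, interpolate, present, target_hz=60):
    """API-agnostic sketch of an interpolation-based frame-gen loop."""
    half_frame = 1.0 / (2 * target_hz)
    prev = render_real_frame()          # frame N-1
    while True:
        curr = render_real_frame()      # frame N
        # Optical flow + blending happen in here; in FSR 3 this is
        # ordinary compute work, not a neural network.
        mid = interpolate(prev, curr)
        present(mid)                    # generated in-between frame
        time.sleep(half_frame)          # crude stand-in for frame pacing
        present(curr)                   # the real frame
        prev = curr
```

Note that the generated frame is presented before the real frame it was interpolated towards, which is where the added latency of this approach comes from.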
 
There is a tech used on the Meta Quest 2 and 3 that uses AI frame prediction to generate frames. It can take a minimum of 36 fps and create 36 more frames to reach 72 fps, but it creates weird warping of geometry if used too heavily in fast-moving scenes.
Oculus was definitely one of the pioneers of 'reconstruction'. They called it 'timewarp'/'spacewarp', and there were definite stages of progress in which aspects they were able to combine temporally to improve things, but it was very much a similar concept of using more advanced reconstruction elements for 'full image' enhancements.

Basically, they had frame generation like five years before Nvidia.

A great example of good minds + necessity being the mother of invention.
 
That's actually where I'm at. The problem is that there's too much grunt required. And I'm going to be very reductive here, so bear with me on this.

There will always be this sweet spot between clock speed, energy required, cooling required and silicon required.
And I don't really care about which IHV, but if we're having a serious discussion about getting 4090 levels of power into a form factor the size of a PS5, then we're talking about cramming all that power into a die of approximately 292 mm² to 350 mm².

So let's assume that to run Cyberpunk's path tracing you need a certain level of computational power, call it X, and right now X is, say, 4090-level computation. And you're looking at about 300-450 W of power consumption on a 4090 to play that game with PT on.

Now combine that with a CPU (~80 mm²) and shrink it, together with a 4090 (609 mm²), into something around 350 mm². Think about all that power being pushed into a very small area; cooling, to me, becomes increasingly harder to do.
And for consoles to exist, they have to be at a very particular price point.

So consider the combination of heat, cooling, energy, and silicon size: the smaller the die gets and the more computation we require of it, the higher the W/mm² climbs, until eventually we have no materials to cool it, at least nothing that will let us stay at console-level pricing. The obvious answer is to go wider with slower clocks, reducing the power requirement and increasing the die size, and therefore the cooling area, but then we're paying significantly more per die due to silicon costs.
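Putting rough numbers on that W/mm² point, using the same figures as above (and treating board power as die power for simplicity, which overstates the absolute values but not the trend):

```python
# Back-of-envelope power density, figures from the post above.
gpu_w       = 450     # upper end of 4090 power draw while gaming
gpu_mm2     = 609     # AD102 die area
console_mm2 = 350     # rough console SoC area budget

print(f"4090 die:    {gpu_w / gpu_mm2:.2f} W/mm^2")      # ~0.74
print(f"console SoC: {gpu_w / console_mm2:.2f} W/mm^2")  # ~1.29
```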

Thus, regardless of AMD or Nvidia, the issue as I see it is that there's a clear physics barrier that can only be overcome by significant cost increases. And the reason PC continues to flourish is that we have more applications for this level of power (whereas consoles are dedicated gaming machines only), and also that we're moving back to mainframe days, where computation moves to the cloud so that it's cheaper for everyone to access.

With how slowly our node shrinks are coming, I just don't see how, by PS6, we'll be able to fit that level of computation into 350 mm² of silicon.

We could develop entirely new hardware accelerators, or come up with a way to use an order of magnitude less silicon for the same amount of computation, but outside of that, I don't think the node shrinks will be far enough along by 2026/27 to make this happen. And even if they were, I don't think the cooling solution would be ideal for keeping us at our current price point.

I like your reasoning here, but really 609 + 80 ≈ 690. So we only need to double the density, or halve the size, to realize such a system in 350 mm²!

Also, we're talking about going from 7 nm, so by the time we get to 2 or 3 nm it might actually be possible.
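Spelling out that arithmetic (the node caveat is my own addition: logic density roughly doubles over a couple of node jumps, but SRAM and I/O barely shrink anymore, so the real gap is wider than the headline figure suggests):

```python
# The post's arithmetic, spelled out.
gpu_mm2, cpu_mm2 = 609, 80    # 4090 die plus a Zen-class CPU chiplet
target_mm2 = 350              # console-sized SoC budget

need = (gpu_mm2 + cpu_mm2) / target_mm2
print(f"density scaling needed: {need:.2f}x")   # ~1.97x
```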
 
I think there are also some things to consider from a more optimistic point of view going forward.

The RTX 4090 isn't really 609 mm², as a significant amount of the chip is disabled, including a large amount of L3. I'd even suspect the amount of L3 enabled isn't really "necessary" in a practical sense if it were a bespoke (or semi-bespoke) pure gaming product.

There's likely room to improve the per-transistor performance/capability of RT/PT, in terms of both HW and SW, going forward.

Granted, there might be some cross-gen limitations, but that SW/HW synergy is also going to be more leverageable from a console game development standpoint in the future, as opposed to the current PC "bolt-on" compromises.

We also still have the relatively low-hanging fruit of increasingly leveraging ML and its considerably higher pace of scaling.

So, all in all, do we need 76B transistors on a next-gen console to achieve Cyberpunk 2077 RT Overdrive equivalent (or even better) graphics? Maybe (probably) not.
 
I somewhat agree. I think the PS5 Pro will be $599 in the US, but that is for a Digital Edition, so in essence it will be $150 more than the PS5 Digital Edition at current pricing.
Yup!

As I mentioned before...
I don't see a $499 Pro DE happening anytime soon. PS5 sales have slowed a bit but haven't collapsed. The current slim PS5 DE is around $399-$449 in most stores, depending on bundle, and the OG model is still going for $499. If anything, the PS5 Pro will be $599 (or more) and will be marketed towards enthusiasts (who are willing to pay more), not as a replacement for the current two models, which aim at everyone else.
 
Oculus was definitely one of the pioneers of 'reconstruction'. They called it 'timewarp'/'spacewarp', and there were definite stages of progress in which aspects they were able to combine temporally to improve things, but it was very much a similar concept of using more advanced reconstruction elements for 'full image' enhancements.

Basically, they had frame generation like five years before Nvidia.

A great example of good minds + necessity being the mother of invention.

It's a completely different scenario and not really comparable. Oculus uses head-movement tracking data to predict the next frame, and I believe it internally renders a wider field of view than the current viewport to provide data for the "generated frame". So it's not a fully generated frame; it's more like they calculate the player's predicted viewpoint within an already rendered image. At least that's how I understood it.
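Roughly, that reprojection step works like this (a toy, rotation-only sketch of the idea, not Oculus' actual code):

```python
import numpy as np

def reproject_ray(ray, render_rot, display_rot):
    """Rotation-only 'timewarp' for a single display pixel's view ray.

    ray:         unit view direction of the pixel, in camera space
    render_rot:  3x3 head rotation at render time
    display_rot: 3x3 head rotation predicted for scan-out

    Returns the direction to sample in the already-rendered image.
    Because that image was rendered with a wider FOV than the display,
    small pose deltas still land inside it; no new frame is rendered.
    """
    delta = render_rot.T @ display_rot   # pose change since render
    return delta @ ray

# Example: head turned 2 degrees right between render and scan-out.
th = np.radians(2.0)
yaw = np.array([[ np.cos(th), 0.0, np.sin(th)],
                [ 0.0,        1.0, 0.0       ],
                [-np.sin(th), 0.0, np.cos(th)]])
print(reproject_ray(np.array([0.0, 0.0, -1.0]), np.eye(3), yaw))
```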
 
It's not completely different, is it? It's still frame generation. The technique differs between VR and non-VR, since VR gets more predictive data from the headset. But I believe frame gen on console (non-VR) could get much better than on PC, with even lower latencies.
 