PlayStation 5 [PS5] [Release November 12, 2020]

Thermal throttling is likely there, as with any desktop processor, even those with a “guaranteed” fixed clock, but only as overheat protection. As long as the cooling system is sufficiently provisioned for the target TDP, thermal throttling would not kick in. The declared DVFS ranges are practically guaranteed, and the behaviour is as deterministic as the power management algorithm defines it to be.

That has been how both AMD and Intel desktop processors behave, at least. People dissing the PS5 for using DVFS, as if it were thermal throttling, are in effect speculating that the console maker deliberately skimped on cooling when they know exactly how much heat needs to be moved away from their chip. It could happen, but then that would be an engineering screw-up. :s

I wonder if they'll need to clarify that claim to indicate that the console still needs a reasonable ambient temperature, before someone tries to test their PS5 in an oven.
Google brought up an operating temperature of 5-35°C for prior Sony consoles.

I'm waiting to see whether there are specific modes or other ways of evaluating when a game hits a power budget limit.
While it may be deterministic from the point of view of the system, it's a deterministic system whose set of inputs is larger than previous generations.
Code that was once evaluated in terms of total amount of compute time, occupancy, or bandwidth contention would need to be evaluated in an expanded space of power consumption within the GPU and CPU, and the reasons for a given amount of consumption may sometimes be counter-intuitive.
It would be neat if there were tools that could give that kind of cost information, or ways of asserting a given load baseline during module testing instead of waiting for an unpleasant surprise after late integration.
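Something like the sketch below is the kind of thing I mean. It's purely hypothetical (none of these names are real SDK interfaces), just the shape of a test that samples power while a module's workload runs and fails if the draw creeps over an agreed budget:

```python
import statistics

POWER_BUDGET_WATTS = 180.0  # made-up whole-APU budget for this module's workload slice

def read_package_power_watts():
    """Placeholder: stands in for whatever counter real profiling tools would expose."""
    raise NotImplementedError

def assert_load_baseline(run_workload, samples=120):
    """Run the module's workload repeatedly; fail if its 95th-percentile draw exceeds the budget."""
    readings = []
    for _ in range(samples):
        run_workload()                                # one frame / iteration of the module under test
        readings.append(read_package_power_watts())
    p95 = statistics.quantiles(readings, n=20)[18]    # 95th-percentile power draw
    assert p95 <= POWER_BUDGET_WATTS, (
        f"module exceeds power baseline: p95={p95:.1f} W > {POWER_BUDGET_WATTS} W")
```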
 
The difference, comparing their calculation to the PC space, is that PC max boost clocks are technically sustainable with proper cooling and environmental factors, since the clock isn't deterministic.

If you assume proper cooling and environmental factors can enable technically sustainable max boost clocks, why would an embedded system differ in the possibility?

The PS5 is going to knowingly downclock under a given power load and hence would never, under any circumstance, be capable of delivering the quoted compute.
This is your speculation, with a vaguely defined “given power load” (e.g. load mix?). Please also note that the specification embedded in the official blog did not actually claim to have both CPU and GPU sustained at the max clock.

It would be fine to say up to 10.28 TFLOPs if there was a scenario in which that was actually possible.
From a pure technical writing standpoint, as long as it can hit the clock momentarily, it does deliver the said "up to" rate.
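For reference, that 10.28 number is just the standard peak-FP32 arithmetic evaluated at the maximum clock:

```python
# 36 CUs x 64 lanes x 2 FLOPs per FMA x 2.23 GHz
cus, lanes, flops_per_fma, max_ghz = 36, 64, 2, 2.23
print(cus * lanes * flops_per_fma * max_ghz / 1000)   # ~10.28 TFLOPS
```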
 
I see this comment on Notebookcheck.
Cerny implied that for them it was better to increase the clock of the GPU than to use a larger GPU. But this test shows that a 36-CU Navi overclocked by 18% gains just 7% performance for 40% more power: https://www.techspot.com/review/1883-overclocking-radeon-rx-5700/
Yes, Navi 2 should be more efficient and may behave differently when overclocked, but it's hard to believe.
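Just to put that quoted result in performance-per-watt terms (my arithmetic, not theirs):

```python
# +18% clock gave +7% performance for +40% power in the linked test
perf_ratio, power_ratio = 1.07, 1.40
print(perf_ratio / power_ratio)   # ~0.76, i.e. roughly a 24% drop in performance per watt
```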

But I understand that his aim with the variable clock was to allow the GPU to clock LOWER, not higher, to decrease noise.
 
If the net experience is better for users (i.e. giving the best visuals), I don't see a need to quibble about some marketing speak.

However, if the goal is really just to hit a marketing inflection point and is of little benefit, that would be an issue. I doubt the effort is worth it just for the marketing, so I am interested to see how this pans out.
 
I see this comment on Notebookcheck.
Cerny implied that for them it was better to increase the clock of the GPU than to use a larger GPU. But this test shows that a 36-CU Navi overclocked by 18% gains just 7% performance for 40% more power: https://www.techspot.com/review/1883-overclocking-radeon-rx-5700/
Yes, Navi 2 should be more efficient and may behave differently when overclocked, but it's hard to believe.

But I understand that his aim with the variable clock was to allow the GPU to clock LOWER, not higher, to decrease noise.
But didn't he say something along the lines that the loss of performance will be tiny, alluding to the reverse of the relation you suggest?
 
I wonder if they'll need to clarify that claim to indicate that the console still needs a reasonable ambient temperature, before someone tries to test their PS5 in an oven.
Google brought up an operating temperature of 5-35°C for prior Sony consoles.

I'm waiting to see whether there are specific modes or other ways of evaluating when a game hits a power budget limit.
While it may be deterministic from the point of view of the system, it's a deterministic system whose set of inputs is larger than previous generations.
Code that was once evaluated in terms of total amount of compute time, occupancy, or bandwidth contention would need to be evaluated in an expanded space of power consumption within the GPU and CPU, and the reasons for a given amount of consumption may sometimes be counter-intuitive.
It would be neat if there were tools that could give that kind of cost information, or ways of asserting a given load baseline during module testing instead of waiting for an unpleasant surprise after late integration.

IIRC Cerny said something along the lines of: it doesn't matter where you live, the PS5 won't throttle due to heat.

I assume it will be like the PS4 and PS4 Pro. Sure, it won't throttle, but it will:

1. Sound like a jet taking off as it gets hotter
2. Show an overheat warning telling you to close the game
3. Turn itself off if you ignore that warning (the fan will keep running for a while even after your TV goes blank)

But Cerny also touched on their mistake with the PS4's cooling, so I hope that means the PS5 won't be as loud as its predecessor.
 
He talked around it. He did not say it will absolutely only clock down by a few %.
He did by implication. He implied that doing so could save 10% power. If it’s jumping that much in dynamic consumption, something bizarre is going on. They probably have a bit of hysteresis to prevent motorboating.
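For what it's worth, the hysteresis idea in toy form (just my guess at the shape of it, not Sony's actual controller):

```python
BUDGET_W = 200.0   # assumed GPU power budget
MARGIN_W = 10.0    # hysteresis band below the budget

def next_clock_mhz(current_mhz, estimated_power_w, max_mhz=2230, step_mhz=10):
    """Shed frequency when estimated power crosses the budget; only recover it once
    power has fallen below the band, so the clock doesn't oscillate ("motorboat")."""
    if estimated_power_w > BUDGET_W:
        return max(current_mhz - step_mhz, 0)
    if estimated_power_w < BUDGET_W - MARGIN_W:
        return min(current_mhz + step_mhz, max_mhz)
    return current_mhz   # inside the band: hold steady
```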
 
He did by implication. He implied that doing so could save 10% power. If it’s jumping that much in dynamic consumption, something bizarre is going on. They probably have a bit of hysteresis to prevent motorboating.
It makes sense if the slope of the voltage curve is steep at the top, which it should be. Basically the square of the voltage ratio multiplied by the frequency ratio.

If they need a 4% increase in voltage for a 2% increase in clock at 2.23...
1.04^2 * 1.02 ≈ 1.10, i.e. about 10% more power

The higher clock should add a linear increase by itself because there is that much more switching from the transistors. And the voltage is needed both to increase the slew rate and to compensate for the rail drop from its impedance at really high currents. Maybe not enough metal for the current too, causing more losses. This gets crazy exponential.
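In other words, plugging those ratios into the usual dynamic-power approximation (power roughly proportional to C·V²·f):

```python
voltage_ratio, clock_ratio = 1.04, 1.02   # the 4% voltage / 2% clock example above
print(voltage_ratio**2 * clock_ratio)     # ~1.10, i.e. about 10% more dynamic power
```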

The alternative is to run it at 800MHz with crazy high efficiency, but Cerny seems to have expressed a newly found preference for high clocks. :runaway:
 
It makes sense if the slope of the voltage curve is steep at the top, which it should be. Basically the square of the voltage ratio multiplied by the frequency ratio.

If they need a 4% increase in voltage for a 2% increase in clock at 2.23...
1.04^2 * 1.02 ≈ 1.10, i.e. about 10% more power

The higher clock should add a linear increase by itself because there is that much more switching from the transistors. And the voltage is needed both to increase the slew rate and to compensate for the rail drop from its impedance at really high currents. Maybe not enough metal for the current too, causing more losses. This gets crazy exponential.

The alternative is to run it at 800MHz with crazy high efficiency, but Cerny seems to have expressed a newly found preference for high clocks. :runaway:
I’ve seen Intel say it’s actually closer to cubic with voltage these days. It’s gotta be a pretty aggressive curve there. I honestly don’t understand how they’re going to yield millions of APUs there.
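Taking that at face value and redoing the arithmetic above with a cubic voltage term instead of a squared one:

```python
voltage_ratio, clock_ratio = 1.04, 1.02
print(voltage_ratio**3 * clock_ratio)   # ~1.15, i.e. roughly 15% more dynamic power
```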
 
From a pure technical writing standpoint, as long as it can hit the clock momentarily, it does deliver the said "up to" rate.

Again, untrue. It's not a matter of the duration for which it can run at that clock speed, but rather whether it can do so at all with all ALUs in use.

Cerny himself said that the GPU would struggle to hit 2GHz if they targeted clocks for peak usage. Exactly the scenario you'd be describing when trying to test max computational throughput. So the amount of downclocking is going to depend entirely on how efficiently the running code can exercise the hardware. The more parallelism a game can extract from the hardware, the more power the chip will draw, the more clock it will shed to compensate. My guess based on his own comments is that the lower bound will likely be around the 2GHz mark. Games that aren't as efficiently written and aren't actively engaging all the compute afforded by the chip will benefit from having that code execute at a higher clock speed as opposed to a lower fixed rate.

There is nothing preventing MS from doing the same thing on the Series X. They could go to dynamic clocks, say that when the GPU isn't fully loaded they'll boost it to 2GHz, and claim 13.3 TFLOPs, but it would be equally untrue, since in order to engage all ALU compute it would top out at 1.825GHz.
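For the record, both figures come from the same peak-rate arithmetic, just at different clocks:

```python
cus, lanes, flops_per_fma = 52, 64, 2
print(cus * lanes * flops_per_fma * 2.000 / 1000)   # ~13.3 TFLOPS at a hypothetical 2.0 GHz boost
print(cus * lanes * flops_per_fma * 1.825 / 1000)   # ~12.15 TFLOPS at the fixed 1.825 GHz
```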
 
Please don't infer from my post that the PS5 isn't an awesome machine. I think it is. I'm very excited for it. And as you move up the hardware performance scale, the power increase required relative to perceptible differences grows exponentially. It's just this whole TFLOP thing that has created a mess.
 
There is nothing preventing MS from doing the same thing on the Series X. They could go to dynamic clocks, say that when the GPU isn't fully loaded they'll boost it to 2GHz, and claim 13.3 TFLOPs, but it would be equally untrue, since in order to engage all ALU compute it would top out at 1.825GHz.

Their cooling solution might.
 
It’s not as if MS didn’t recognize the need for a dynamic power envelope. That’s why they specified a different clock rate for SMT mode. I’d argue Sony’s method is the more elegant, but they also should have given developers the choice on SMT. Perhaps that’s too hard of a runtime switch to make...
 
The curious part of Sony's strategy from a developer's point of view is how the dynamic clocks will actually be managed. Perhaps there is some benchmark that looks at the worst-case utilization of your specific title, from which you're given a specific power profile that keeps clocks constant for that game. Or it may indeed be variable while the game is actually running, which might make certain game systems more challenging to synchronize.
 
It’s not as if MS didn’t recognize the need for a dynamic power envelope. That’s why they specified a different clock rate for SMT mode. I’d argue Sony’s method is the more elegant, but they also should have given developers the choice on SMT. Perhaps that’s too hard of a runtime switch to make...
DF's spec sheet mentions SMT for the CPU.
 