PlayStation 5 [PS5] [Release November 12, 2020]

So the opposite of race-to-idle: run the CPU at the slowest clock speed your frametime budget allows. This doesn't bode well for VRR support, and I still don't see how it would allow both CPU and GPU to average clocks above 2.0/3.0 if you're throttling the CPU to free up GPU power budget.
I think the honest answer is: it depends on what the developers want to do.
If they want a high frame rate, they will just reduce the graphics settings low enough that both are running at their maximum clocks.
 
I think the honest answer is: it depends on what the developers want to do.
If they want a high frame rate, they will just reduce the graphics settings low enough that both are running at their maximum clocks.
Of course. By itself, it's a good solution. Compared to XSX, not so much.
 
Is there any feature in the PS5 that would mitigate potential memory bandwidth issues?
None specific to PS5 has been announced yet, but they should have access to VRS and SFS type technologies. Other than that, there's no possible way the SSD could be a bottleneck in any game. So I suppose there can be some changes in how engines work within the limits of the available memory.
 
So the opposite of race-to-idle: run the CPU at the slowest clock speed your frametime budget allows. This doesn't bode well for VRR support, and I still don't see how it would allow both CPU and GPU to average clocks above 2.0/3.0 if you're throttling the CPU to free up GPU power budget.
To be honest, I am still confused by the explanation.
 
It's because with portable and productivity devices (or IoT), the dominant power draw is the quiescent current (the power consumed when everything is powered but idling), and the power management has extremely low idle modes to counter this: the device does something for a few ms, then shuts down almost completely for a second until the user clicks something, an email is received, etc. Also, with low-power devices there isn't as strong an exponential relationship between power and clock speed.
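
To put rough numbers on that (a back-of-the-envelope sketch; all figures below are made up for illustration, not measurements of any real device):

```python
# Back-of-the-envelope sketch of duty cycling on a low-power device.
# All numbers are illustrative, not measurements of any real hardware.

def average_power(active_w, idle_w, duty_cycle):
    """Average draw for a device that is active only a fraction of the time."""
    return active_w * duty_cycle + idle_w * (1.0 - duty_cycle)

# Work for a few ms, then sleep deeply: ~1% duty cycle vs. staying awake.
duty_cycled = average_power(active_w=2.0, idle_w=0.005, duty_cycle=0.01)
always_on = average_power(active_w=2.0, idle_w=0.005, duty_cycle=1.0)

print(f"duty-cycled: {duty_cycled:.3f} W vs always on: {always_on:.1f} W")
```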

With the high power used here, any reduction of quiescent current is dwarfed by the exponential (cubic!) power at the top of the frequency curve. And there aren't many opportunities to power down, if any.

So if you have a task for the frame which is using the max clock at 3.5, filled with AVX-256 instructions, but it finishes within 80% of the time allowed before the next frame, you end up wasting a massive amount of power compared to running it at 3.0 for 100% of the time allowed. There is zero difference in CPU performance between the two, but the former is consuming a LOT more energy, since power use is cubic while the time slice is linear versus frequency.
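
To make that concrete (a quick sketch of the arithmetic under the cubic power model described above; the 10 W baseline is an arbitrary placeholder, only the ratio matters):

```python
# Energy comparison under the cubic power model: P ~ f^3, runtime ~ 1/f.
# The 10 W baseline is made up; only the ratio between the runs matters.

def energy_per_frame(freq_ghz, base_freq=3.5, base_power_w=10.0,
                     frame_ms=16.7, busy_fraction_at_base=0.8):
    power_w = base_power_w * (freq_ghz / base_freq) ** 3          # cubic in f
    runtime_ms = frame_ms * busy_fraction_at_base * (base_freq / freq_ghz)
    return power_w * runtime_ms / 1000.0                          # joules

e_35 = energy_per_frame(3.5)  # finishes in 80% of the frame
e_30 = energy_per_frame(3.0)  # finishes in ~93% of the frame, same work done
print(f"3.5 GHz: {e_35*1000:.0f} mJ, 3.0 GHz: {e_30*1000:.0f} mJ, "
      f"ratio: {e_35/e_30:.2f}x")
```

Since power is cubic but runtime is only linear in 1/f, energy per frame scales with the square of the frequency, so the 3.5 GHz run burns roughly (3.5/3.0)^2 ≈ 1.36x the energy for identical work.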

So that means an additional chunk of watts "for free" which the GPU might have used at that particular point in the pipeline. Hence "squeezing every last drop". It minimizes the chances of the GPU clocking down from 2.23, while the CPU keeps the same effective performance as if it were always locked at 3.5. The power-hungry device here is much more the GPU than the CPU; if there's any frequency drop at stake, it's the GPU where the most performance is gained or lost. It's only logical that the CPU never drops unless it can do so for free.

The required real-time profiling is no different from trying to predict a resolution downscale or LoD change to keep the frame rate consistent, but it's doing the reverse: estimate whether the operation will finish much earlier than the next frame cycle, and lower the clock proportionately, with some safety margin?
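
Something along these lines, perhaps (a purely hypothetical sketch of that heuristic; the frame budget, safety margin, and clock range are my assumptions, not anything Sony has disclosed):

```python
# Hypothetical sketch of the "reverse" heuristic described above: pick the
# slowest clock that still finishes the frame's CPU work in time.
# Budget, margin, and clock range are assumptions, not Sony's actual scheme.

FRAME_BUDGET_MS = 16.7   # 60 fps target
SAFETY_MARGIN = 0.9      # leave 10% headroom for estimation error
MIN_CLOCK_GHZ = 3.0
MAX_CLOCK_GHZ = 3.5

def pick_cpu_clock(predicted_work_ms_at_max):
    """predicted_work_ms_at_max: profiled CPU time this frame at 3.5 GHz."""
    usable_ms = FRAME_BUDGET_MS * SAFETY_MARGIN
    # Runtime scales ~linearly with 1/f, so the slowest clock that still fits:
    needed_ghz = MAX_CLOCK_GHZ * predicted_work_ms_at_max / usable_ms
    return min(MAX_CLOCK_GHZ, max(MIN_CLOCK_GHZ, needed_ghz))

print(pick_cpu_clock(13.4))   # ~80% busy at 3.5 -> ~3.12 GHz suffices
print(pick_cpu_clock(16.0))   # nearly a full frame of work -> stays at 3.5
```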

Even on a PC CPU, with something like Cool'n'Quiet for AMD, I believe your best option for saving power and reducing heat is to race to sleep. You want to use SIMD instructions as much as you can, get your work done 4x faster, and then put the processor back into a low-frequency or "off" state. Obviously if your frame rate is uncapped, you'll just be working all of the time, but if you cap your frame rate you'll save a lot of power in less demanding frames or engine ticks.
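
As a minimal sketch of that race-to-sleep pattern with a capped frame rate (illustrative only; a real engine would use the platform's frame pacing rather than time.sleep):

```python
import time

FRAME_BUDGET_S = 1 / 60  # 60 fps cap

def run_frames(do_frame_work, frames=600):
    """Race to sleep: finish the frame as fast as possible, then idle."""
    for _ in range(frames):
        start = time.perf_counter()
        do_frame_work()                    # run flat out (e.g. SIMD-heavy code)
        slack = FRAME_BUDGET_S - (time.perf_counter() - start)
        if slack > 0:
            time.sleep(slack)              # hand the slack back as idle time
```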

I don't know how idle states work on Xbox One X or Series X to know if they do this, or if they just waste power by sitting at max clock all the time. I'm not sure that Sony's strategy of spreading out work really makes sense from a power saving perspective, but it does seem like there are wins in terms of increasing average performance.
 
Right, so it's a one-way direction, you mean? That makes sense. The CPU will likely never need more power; the GPU is going to be 3x-4x more power hungry.

edit: nah, it's gotta go back and forth
There are at least 2 systems:

- SmartShift detects idle states on the CPU and gives any available power to the GPU, so apparently based on load (not power consumption), but it's just from CPU to GPU and SmartShift doesn't modify the clocks.
- The variable clocks system detects the loads of both CPU and GPU and sets the clocks to keep the combined load at <=100% in total.
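
A toy model of how those two mechanisms might interact (entirely hypothetical; neither AMD nor Sony has published this logic, and all numbers below are made up):

```python
# Entirely hypothetical toy model of the two mechanisms described above.
# Budgets and allocations are invented; real behavior is unpublished.

TOTAL_BUDGET_W = 200.0
CPU_ALLOCATION_W = 50.0
GPU_ALLOCATION_W = 150.0

def smartshift(cpu_draw_w):
    """SmartShift: hand unused CPU budget to the GPU (one-way, no clock changes)."""
    return GPU_ALLOCATION_W + max(0.0, CPU_ALLOCATION_W - cpu_draw_w)

def variable_clocks(cpu_load, gpu_load, cpu_max=3.5, gpu_max=2.23):
    """Variable clocks: scale both down if combined activity exceeds the budget.

    Naive linear scaling for illustration; real power rises super-linearly
    with clock, so the actual controller would need less of a drop.
    """
    demand = cpu_load + gpu_load  # normalized so 1.0 == full power budget
    scale = min(1.0, 1.0 / demand) if demand > 0 else 1.0
    return cpu_max * scale, gpu_max * scale

print(smartshift(cpu_draw_w=30.0))   # 170.0: 20 W shifted to the GPU
print(variable_clocks(0.45, 0.65))   # combined 110% -> both scaled ~9% down
```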
 
There are at least 2 systems:

- SmartShift detects idle states on the CPU and gives any available power to the GPU, so apparently based on load (not power consumption), but it's just from CPU to GPU and SmartShift doesn't modify the clocks.
- The variable clocks system detects the loads of both CPU and GPU and sets the clocks to keep the combined load at <=100% in total.
That's what I wrote.
 
However the details shake out, this clearly seems to be a smart optimization.

I think no one's denying that either. It is a smart solution for the situation they are in (the 36-CU limit).

The disclosure of Sony's frequency curve at given activity levels is inevitable in my mind. Seems better to get out in front of it now, so there is time for people to forget prior to launch and allow the focus to shift to the games as it should. And honestly, I think that is what they are "trying" so hard to do now. Unfortunately, that obvious lack of transparency is going to continue to raise questions rather than simply putting them all to rest with hard facts.

Oh yes, certainly sometime, somewhere, that will come to light. Whether it matters is another thing though; they will need to come with games that appeal. As long as it won't be a PS3 in that regard, it doesn't matter if it's 9 TF or 10 TF or somewhere in between. If it's another PS2, it's a buy for me no matter what.
 
I think no one's denying that either. It is a smart solution for the situation they are in (the 36-CU limit).

It's a smart solution in any situation. The way they designed it, there don't appear to be any drawbacks, just benefits over not having it.
 
It's a smart solution in any situation. The way they designed it, there don't appear to be any drawbacks, just benefits over not having it.

They more or less had to; there are no drawbacks because otherwise the clocks would not even be 2 GHz for the GPU and 3 GHz for the CPU. Therefore it has no drawbacks for Sony's PS5.
 
Oh yes, certainly sometime, somewhere, that will come to light. Whether it matters is another thing though; they will need to come with games that appeal. As long as it won't be a PS3 in that regard, it doesn't matter if it's 9 TF or 10 TF or somewhere in between. If it's another PS2, it's a buy for me no matter what.
I agree, and that's my point... even if it's 9.2 or whatever, just get it over with. It's not going to impact sales. It's a minuscule percentage of people that would ever be aware. Previous investment in a given ecosystem is going to hold people in their existing camps. So why not just share openly with the people that care? They have nothing to be embarrassed about. It's going to have great games at the end of the day, and that's all that will matter.
 
They more or less had to; there are no drawbacks because otherwise the clocks would not even be 2 GHz for the GPU and 3 GHz for the CPU. Therefore it has no drawbacks for Sony's PS5.

It's a good solution in a power- or thermally-constrained environment. That's why it was designed for laptops, but it does make sense in the console space. The only downside is whether devs who are trying to push the limits of the system find some dynamism in the performance annoying to work around. That of course would really come down to individual developers and how much variability they actually notice. It sounds like any potential worst case would see clocks drop in the single digits, so it honestly shouldn't be too hard for them to work with.
 