Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
It will reduce clock speeds according to power usage (actually following a predefined schema tied to specific workloads), so the chip never goes above a specific "power stage".
So it is hard to say when the chip reaches which clock speed. Developers get fixed profiles so they can test what they can do at specific frequencies.
According to Cerny's words, even >2 GHz is not really possible to hold stable under every workload. So we can just wait and see what happens.
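To make the "power stage" idea concrete, here is a speculative sketch (not Sony's documented algorithm, and all numbers are invented): for a workload's measured activity, pick the highest clock step whose modeled power stays under a fixed budget.

```python
# Speculative sketch of power-governed clock selection. Budget, voltages,
# and the constant k are invented for illustration only.

POWER_BUDGET_W = 180.0  # assumed fixed SoC power budget

# Assumed voltage required at each GPU clock step (GHz -> V)
VOLTAGE_FOR_CLOCK = {1.8: 0.95, 2.0: 1.00, 2.1: 1.05, 2.23: 1.10}

def modeled_power(activity: float, clock_ghz: float, k: float = 70.0) -> float:
    """Rough dynamic-power model: P ~ k * activity * f * V^2."""
    v = VOLTAGE_FOR_CLOCK[clock_ghz]
    return k * activity * clock_ghz * v * v

def pick_clock(activity: float) -> float:
    """Highest clock step whose modeled power fits the budget."""
    for clock in sorted(VOLTAGE_FOR_CLOCK, reverse=True):
        if modeled_power(activity, clock) <= POWER_BUDGET_W:
            return clock
    return min(VOLTAGE_FOR_CLOCK)  # floor clock if nothing fits

# A light workload holds the top clock; a heavy one forces a downclock.
```

Under this toy model a moderate workload (activity 0.7) holds the 2.23 GHz cap, while a full-tilt workload (activity 1.0) gets pushed down a step, which matches the "hard to hold >2 GHz under every workload" framing.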

My question was poorly worded. I think I have a general idea of how the PS5's dynamic clock rates work (lending power through SmartShift, different consumption depending on workload).

What happens with consoles that have fixed clock speeds? Are the clock rates so fixed that only power draw can fluctuate, or do fixed-clock consoles allow devs to cap their games to clock speeds lower than what the hardware is rated for?

The XSX CPU is said to be fixed at 3.8 GHz without SMT, but can a dev instead cap it to 3.0 GHz while drawing the same power it usually does at 3.8, or are they better off focusing on the fixed 3.8 figure?

The whole differing-workload idea is throwing me off with fixed-clock consoles.

I'm imagining that for fixed-clock consoles the clock speeds don't change and only the power consumption varies, because different workloads draw different amounts of power, as long as all of them stay within the allotted power budget. Is this correct?

I appreciate the patience.
 
Yeah, with fixed clocks, as (activity) loads increase you cannot downclock to reduce power requirements, so the system will keep drawing more power to meet demand until there is a shutdown or a crash.
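This can be reduced to a toy model (wattages invented): at a fixed clock, power draw simply tracks workload activity, and the power supply must be provisioned for the worst realistic case.

```python
# Illustrative fixed-clock model with invented numbers.

PSU_LIMIT_W = 315.0       # assumed supply capacity
FIXED_CLOCK_GHZ = 3.8     # constant regardless of load

def power_draw(activity: float, watts_at_full_load: float = 300.0) -> float:
    """At a fixed clock, only switching activity moves the power number."""
    return watts_at_full_load * activity

def stays_up(activity: float) -> bool:
    """A 'power virus' beyond the provisioned worst case trips protection
    (shutdown) instead of triggering a downclock."""
    return power_draw(activity) <= PSU_LIMIT_W
```

The design consequence is that a fixed-clock console budgets its PSU and cooling for near-worst-case activity, rather than steering the clock away from that case at runtime.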
 
Looking at just GPU Utilization you’d be unlikely to be able to guess its impact on clock rate.
Which begs the question of how PS5 deals with the trifecta of activity, clock and power.
 
XSX CPU has two operating modes:
  1. 3.8 GHz locked, SMT disabled
  2. 3.6 GHz locked, SMT enabled

Developer selectable
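As a sketch, the developer-facing choice amounts to picking one of exactly two fixed profiles; the `CpuMode` type and `select_mode` helper below are hypothetical names, not Microsoft's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CpuMode:
    clock_ghz: float
    smt: bool
    threads: int

# The two published profiles; a "cap it to 3.0 GHz" option is not on the menu.
XSX_CPU_MODES = (
    CpuMode(clock_ghz=3.8, smt=False, threads=8),   # 1 thread per core
    CpuMode(clock_ghz=3.6, smt=True,  threads=16),  # 2 threads per core
)

def select_mode(want_smt: bool) -> CpuMode:
    """Developers pick one of exactly two fixed configurations."""
    return XSX_CPU_MODES[1] if want_smt else XSX_CPU_MODES[0]
```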

Would you still be able to cap it lower? And if you do cap it lower and it eats up the entire power budget for the processor, does that mean your program is quite terrible?
 
Those are the only selections the developers have.
 
Which begs the question of how PS5 deals with the trifecta of activity, clock and power.
They can debug their processor during design to actually give them those specific answers.

I don’t think it’s possible to understand those values without debugging.
 
They can debug their processor during design to actually give them those specific answers. I don’t think it’s possible to understand those values without debugging.

The PS5 determines the power/clockspeed for code, I'm interested in the logic that dictates this.
 
Which begs the question of how PS5 deals with the trifecta of activity, clock and power.
The only thing that deals with power is SmartShift, when it allocates some to the GPU, and it should only need to know the CPU activity for that. There shouldn't be any need for power consumption sensors: not for SmartShift and not for the variable clocks.

I highly suspect that when SmartShift actually gives some power to the GPU, it also raises the activity cap in order to allow the GPU to do more work with that additional power. I think it's really fascinating tech and the future of all home consoles.
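The lending idea described above can be sketched as simple budget arithmetic (this is speculation about the mechanism, not AMD's implementation; all wattages are invented): power the CPU isn't using is lent to the GPU on top of the GPU's baseline share.

```python
# Speculative SmartShift-style budget transfer with invented numbers.

TOTAL_BUDGET_W = 200.0   # assumed fixed total SoC budget
CPU_RESERVED_W = 60.0    # assumed CPU share of that budget

def smartshift(cpu_demand_w: float) -> tuple[float, float]:
    """Return (cpu_allocation_w, gpu_allocation_w).

    The GPU always gets its baseline share; any headroom the CPU
    leaves unused is shifted over on top of it.
    """
    cpu_alloc = min(cpu_demand_w, CPU_RESERVED_W)
    unused = CPU_RESERVED_W - cpu_alloc              # CPU headroom to lend
    gpu_alloc = (TOTAL_BUDGET_W - CPU_RESERVED_W) + unused
    return cpu_alloc, gpu_alloc
```

In this sketch, a CPU-light frame (40 W demanded) hands the GPU an extra 20 W, which is where a raised GPU activity cap would come in.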
 
Doesn't the power needed to run the code dictate the clock speed?
I don't know, does it? :runaway:

The only thing that deals with power is SmartShift, when it allocates some to the GPU, and it should only need to know the CPU activity for that. There shouldn't be any need for power consumption sensors: not for SmartShift and not for the variable clocks.
There is a fixed power budget but there will be power-hungry CPU and GPU features. You could easily consume more power at lower frequencies and less power at higher frequencies - the example Mark Cerny used was 256-bit instructions. So what determines the clockspeed?
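Cerny's 256-bit-instruction point reduces to toy arithmetic (coefficients invented): a wide-SIMD-heavy workload can burn more power at a lower clock than light scalar code does at a higher one, so frequency alone doesn't determine consumption.

```python
# Toy dynamic-power comparison; activity factors and clocks are invented.

def dynamic_power(activity: float, clock_ghz: float, volts: float) -> float:
    """P ~ C * activity * f * V^2, with the capacitance term folded to 1."""
    return activity * clock_ghz * volts ** 2

# Light scalar code at a high clock vs. dense 256-bit SIMD at a lower clock.
light_scalar_high_clock = dynamic_power(activity=0.4, clock_ghz=3.5, volts=1.0)
avx256_lower_clock      = dynamic_power(activity=0.9, clock_ghz=3.0, volts=1.0)
# The wide-vector workload draws more despite running 500 MHz slower.
```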
 
I don't know, does it? :runaway:


There is a fixed power budget but there will be power-hungry CPU and GPU features. You could easily consume more power at lower frequencies and less power at higher frequencies - the example Mark Cerny used was 256-bit instructions. So what determines the clockspeed?
I am not sure I understand what you don't understand here. They must have done plenty of tests in order to know the precise relationships between activities, clocks and power consumption.
 
I am not sure I understand what you don't understand here. They must have done plenty of tests in order to know the precise relationships between activities, clocks and power consumption.
I think he’s asking for the technical portion of it. I do somewhat recall reading how AMD accomplished this type of dynamic clocking on laptops (separate from power shift)


A new System Management Controller was critical to enabling these enhancements, according to AMD. Ryzen 4000 series mobile processors have a dedicated microcontroller to monitor activity from many parts of the SoC (IP blocks and buses). Observed activity is constantly monitored and the data is used to adjust clock rates and voltages accordingly. Higher utilization will increase clocks, while low utilization decreases clocks – within power and thermal limits, of course.
https://hothardware.com/reviews/amd-ryzen-4000-series-mobile-processors
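The control loop the article describes can be sketched roughly like this (step sizes, thresholds, and the clock range are all invented, and the real controller also weighs voltage, power, and thermals): sample activity, then nudge the clock up or down within limits.

```python
# Hedged sketch of an activity-driven clock governor; not AMD's SMC.

CLOCK_MIN_GHZ, CLOCK_MAX_GHZ = 1.4, 4.2   # assumed allowed range
STEP_GHZ = 0.025                          # assumed fine-grained step

def adjust_clock(clock: float, utilization: float,
                 thermal_headroom: bool) -> float:
    """One control tick: raise the clock on high utilization (if thermals
    allow), lower it on low utilization, and always clamp to the range."""
    if utilization > 0.85 and thermal_headroom:
        clock += STEP_GHZ
    elif utilization < 0.40:
        clock -= STEP_GHZ
    return round(min(CLOCK_MAX_GHZ, max(CLOCK_MIN_GHZ, clock)), 3)
```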
 
I am not sure I understand what you don't understand here. They must have done plenty of tests in order to know the precise relationships between activities, clocks and power consumption.

Actually, what was quoted was: "One of our breakthroughs was finding a set of frequencies where the hotspot - meaning the thermal density of the CPU and the GPU - is the same. And that's what we've done. They're equivalently easy to cool or difficult to cool - whatever you want to call it.". You have clockspeed. You have the relative/absolute activity of the CPU and GPU. You have the thermal profile. And you have power.

I still don't understand why he would not understand that. Dynamic clocks are not something new. Before it was based on thermal constraints, now it's based on activity constraints.
And power. Power is finite. Mark Cerny talks about activity and power, which impacts thermals. As does the relationship between clockspeed and activity. :yep2:
 
There’s no breakthroughs here.

You have a thermal budget, you have a power budget. Boost algorithms float between the two. If you’re going to let boost determine overall performance you need to pick between fluctuating framerate or dynamic resolution.

Now the good thing is that amd’s boost is pretty sophisticated, very quick to act and can do so in very small increments. The drawback is that it’s sensitive to outside factors so run to run variance is there.

AMD does it on the CPU side for single-core performance because their all-core clocks drop due to heat and power draw on the 7nm process. Boost gives them the ability to keep Intel in check on single-core and light workloads, but once the load is there, the frequency will drop by up to 400 MHz+ depending on the load.

If AVX continues to pick up steam, the CPU will need to drop clocks often due to the spike in power draw and instant heat. If not, then the frequency fluctuations will be easier to manage and more predictable.
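The clock-drop behavior described above can be sketched with invented numbers (the boost figure, per-core derating, and AVX offset are all illustrative, not any shipping CPU's values): the sustained clock is the boost figure minus an all-core derating and a wide-vector penalty.

```python
# Toy model of boost derating under load; every constant is invented.

BOOST_GHZ = 4.7
AVX_HEAVY_DROP_GHZ = 0.4      # "up to 400 MHz+" style offset
ALL_CORE_STEP_GHZ = 0.05      # derating per extra loaded core

def sustained_clock(avx_heavy: bool, loaded_cores: int) -> float:
    """Boost figure minus an all-core derating and a wide-vector penalty."""
    clock = BOOST_GHZ - ALL_CORE_STEP_GHZ * max(0, loaded_cores - 1)
    if avx_heavy:
        clock -= AVX_HEAVY_DROP_GHZ
    return round(clock, 3)
```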
 
Actually, what was quoted was: "One of our breakthroughs was finding a set of frequencies where the hotspot - meaning the thermal density of the CPU and the GPU - is the same. And that's what we've done. They're equivalently easy to cool or difficult to cool - whatever you want to call it.". You have clockspeed. You have the relative/absolute activity of the CPU and GPU. You have the thermal profile. And you have power.


And power. Power is finite. Mark Cerny talks about activity and power, which impacts thermals. As does the relationship between clockspeed and activity. :yep2:
Of course. But to adjust the clocks, the retail PS5s should only need to see the activity level. But I think I understand you now. You want all the secret methodology behind the whole tech, how they ran their tests for years, the whole relationship between power, clocks and activity, and the power margins because of the silicon lottery and local climate. Why didn't you simply ask? I'll contact Cerny immediately via PSN PM. ;)
 
I still don't understand why he would not understand that. Dynamic clocks are not something new. Before that it was based on thermal contraints, now it's based on activities contraints.
What’s interesting is that the Xbox has it too, but in a much coarser sense. There’s an implied higher level of switching activity with SMT, so MS dropped the clocks to compensate. I think this is flying under the radar because the conditions are much more cut and dried. I think both are interesting and novel solutions though. MS’s adds developer choice while also allowing a “set and forget” option, so they’re not making their jobs harder.

I wonder if Series S will have identical CPU modes and clocks to ease compatibility concerns?
 