Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
But you are not forced to downclock both, if I remember correctly.
We just don't know.
It always depends on the power limits. The solution Sony chose does not sound as if there is enough headroom to reach max frequencies all the time, even if only one component is under stress.
You can stress a GPU (100% load) and draw 100 W, or you can choose other algorithms and have a 300 W power draw (on the GPU alone). So it depends on how Sony set all this up. They have a big box, so plenty of room for a cooling solution, and they still need techs like SmartShift to reroute power from the CPU to the GPU. Because of this I suspect that the chip area is too small to distribute the heat fast enough; otherwise they could just give both components "full power", which is actually quite cheap if your cooling solution can handle the heat (e.g. increase fan speed, which Sony did on PS3, PS4 & PS4 Pro, so they don't have a problem with a loud cooling solution :) ).
But if the chip area is too small to handle the heat, you can't just give your components more juice, because sooner or later you'd have to throttle them or the heat would damage the APU.
With a bigger chip you have a bigger area that can more easily transfer the heat to the cooler.
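The chip-area argument above can be made concrete with a toy calculation: the same power pushed through a smaller die means more watts per mm² for the cooler to extract. The die sizes and wattages below are hypothetical round numbers, not the real chips.

```python
# Toy power-density comparison; all figures are illustrative, not actual specs.
def power_density(watts: float, die_mm2: float) -> float:
    """Watts per square millimetre the cooler has to pull out of the die."""
    return watts / die_mm2

small_die = power_density(200, 300)  # hypothetical smaller APU
big_die = power_density(200, 360)    # hypothetical larger APU

print(round(small_die, 3))  # 0.667 W/mm^2
print(round(big_die, 3))    # 0.556 W/mm^2
```

Same 200 W, but the smaller die concentrates it over less area, which is the throttling pressure described above.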

Although it's possible, I don't think a typical situation would ever encounter that scenario and I suspect if it did occur, it wouldn't be common or it would be for very few cycles or frames.
Which could lead to uneven frame times or worse.
But as a developer I would just aim for a smaller target and hope for the best (e.g. let dynamic resolution do its work).
 
Also remember that the XBSX will have 52 CUs, so it has a different set of requirements for its power draw and heat.

Just to add to what @iroboto wrote above: with heat, one issue can be the contact area you have available to transfer it to the heat sink. So in the case of the XSX, the area used by the CUs should be larger (even accounting for possible customisations), and the total chip area should be larger too.

So for the same power consumed by the GPU you should have an easier time getting rid of that heat into the heatsink on XSX. Perhaps this explains some of the talk about PS5 cooling and exotic thermal compounds to connect the chip and heatsink.

I'm guessing that in some cases the PS5 will consume more power than the XSX (even with the XSX memory arrangement), but in times of very heavy loading the XSX can sustain greater total heat output from the chip simply due to CU area. But that's a guess and I guess we'll see what happens!
 
We don't know what the chip looks like yet. Also, the XBSX GPU is apparently 13% bigger than the XBO APU despite a smaller node, so chances are the PS5 chip might also be big. Then there are all the exotic cooling patents that have been coming out lately, which point to the PS5 having more than adequate cooling without making a lot of noise or heat.
 
If they had such a good cooling solution, they wouldn't need technologies like SmartShift, which was actually made for mobile chips. They could just have provided the bit of extra power needed to max it out at all times, and it would be great ;)
SmartShift only makes sense if you have a problem with heat (like in laptops), because using a slightly stronger power supply is not really a problem in such a big stationary box, and it doesn't cost that much more (a few cents at most). SmartShift just makes a "simple" system much more complicated to max out.
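For what it's worth, the SmartShift idea being argued about here can be sketched as a shared power budget between CPU and GPU. All the wattages below are invented for illustration and are not Sony's actual figures.

```python
# Sketch of a SmartShift-style shared power budget (all numbers made up).
def allocate(cpu_demand_w: float, gpu_demand_w: float,
             cpu_cap_w: float = 60.0, total_w: float = 200.0):
    """CPU gets up to its cap; the GPU can absorb whatever the CPU leaves."""
    cpu = min(cpu_demand_w, cpu_cap_w)
    gpu = min(gpu_demand_w, total_w - cpu)
    return cpu, gpu

print(allocate(40.0, 180.0))  # (40.0, 160.0): CPU headroom shifted to the GPU
print(allocate(60.0, 120.0))  # (60.0, 120.0): both fit within the budget
```

The point of contention in the post above is that a stationary console could simply raise `total_w` instead of shuffling it around.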
 
You forgot that it was only audio RT at one point too.
Forgot that one! One more for the list lol.


As indicated earlier, we've seen ample evidence all boost and game clocks are within 10% of their max.
We have? Where? For discrete RDNA1 GPUs or for unreleased RDNA2 GPUs?


Different chips will have different thermal properties. Some chips are bound to run hotter than others running the same code, this is parametric yield.
If you want to keep costs down, you're going to have to allow for larger thermal allowances. If game code keeps frequencies high into the 95-99% range at all times, you're going to have to start dropping off lower yield chips.
The alternative is to have your frequencies throttle down earlier, to allow for lower quality chips.
This is an exercise in futility because you have no idea what the yields are for the rated frequencies.
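The yield-versus-frequency trade-off described above can be illustrated with a made-up distribution of per-die maximum stable clocks. The mean, spread and clock targets below are invented purely for the sketch; as the post says, the real yield numbers are unknown.

```python
import random

# Hypothetical parametric yield: each die gets a random maximum stable clock.
random.seed(0)
max_clocks = [random.gauss(2100, 100) for _ in range(100_000)]  # MHz, made up

def yield_at(target_mhz: float) -> float:
    """Fraction of dies that can run at the target clock."""
    return sum(c >= target_mhz for c in max_clocks) / len(max_clocks)

# A conservative clock keeps most dies; chasing the top boost clock
# throws away a large share of the wafer.
print(yield_at(2000))  # high yield at a conservative clock
print(yield_at(2230))  # much lower yield at the max boost clock
```

This is why "run at 95-99% of max at all times" and "keep costs down" pull in opposite directions.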



We're making sensible predictions on what Sony would be willing to target based upon how much they want to release the console for.
Any prediction made on a RDNA2 GPU based on RDNA1 typical clocks isn't sensible at all.
AMD is claiming a 50% improvement in RDNA2 performance compared to RDNA1, which AMD hasn't been able to do since the RV770 a dozen years ago.
They claim it comes from a mix of higher GPU clocks at ISO power consumption and higher IPC, and we don't know in what proportion.

[image: AMD slide showing the claimed RDNA → RDNA 2 performance-per-watt improvement]




For all we know, it could be just 7% on IPC and a whole 40% on increased clocks, meaning the average 1.8GHz RDNA1 GPUs now compare to a whopping 2.5GHz on RDNA2. And that would put the PS5's iGPU on an actually conservative range of what RDNA2 GPUs can achieve.


AMD never really got their Maxwell -> Pascal frequency boost. GF's 14nm wasn't anywhere near as performant as TSMC's 16FF, Polaris and then Vega clocked way below their expectations, and then RTG had to switch foundries which probably made things more difficult. Back in 2017 we heard of the Zen engineers going to RTG to help with the raising clock process and this is probably the very first architecture we'll see with that input. Clocks on RDNA2 are a wildcard for the moment.


Now imagine the whole frequency range scaling up by about 15% for next gen, and the difference between a conservative (FurMark-style, like MS used for the 360) base and expected max boost doing the same. Hmm. 350 × 1.15 = 402.5 MHz.

Hmm. I wonder what the difference between XSX constant clocks and PS5 max boost is? 2.23 − 1.825 = 0.405 GHz. Or 405 MHz.
Didn't you just extrapolate numbers out of completely made-up numbers? What was your point?
And for RDNA2 GPUs to scale on frequency just 15% over RDNA1, that would mean the IPC advantage on RDNA2 over RDNA1 is a whopping 30%. Isn't that even more than the IPC boost from GCN5 to RDNA1?
 
What uses more power is how much of your GPU is flipping states.
Both CPUs and GPUs have transistors that draw more in a fixed state than neighbouring transistors flipping states constantly. I think you're mentally putting all transistors as near equals in terms of type, material, purpose and power draw but this is not the case, particularly with "3D transistors" because the complexity of supporting multiple gates from a single transistor is part of the appeal (and consequence) of the design. You can use flipping to determine a level of activity, but not power draw. This is what makes Sony's potential implementation so intriguing.

It could be really dumb, as in just allowing the CPU and GPU to draw more and more power until the PSU taps out, relying on AMD's power management aggressively shutting down unused CPU/GPU blocks.
 
Somehow it's controversial to make a sensible prediction that such a tolerance would be between 90-100% of their marketed frequencies. Mark was the one who handed out the point that 2.0 GHz was unattainable with fixed clocks, and he also handed out the figure that a 10% drop in frequency gives back 27% of the power.

It's not that people are trying to create a narrative around it being a 9.2 TF chip. If we use Mark's own numbers, this is the range it could operate in. It just so happens that 10% lower is roughly 2000 MHz, and that is 9.2 TF.

If PS5 is operating between 9.2 TF and 10.2 TF, that's fine. But some people are unwilling to accept any possibility of it hitting anything less than 10.2 TF. That isn't variable, that's just fixed. That's been the ongoing discussion here for some time. There's not a lot of room to move, either: any drop in frequency below 2170 MHz would put PS5 below 10 TF. That's a 60 MHz drop.

I don't think any of this will have any bearing on anything, but if people are upset that these conclusions were reached at B3D, that's unfair.

Even with the discussion around the 2 GHz GitHub leak: was it wrong for people to speculate that it would run at less than 2 GHz? There was no rumour it was variable clocks, and this is the only time a console has ever run variable clocks. Cerny himself admitted that with fixed clocks they could not achieve 2 GHz. Doesn't that actually validate that people were right that 2 GHz was too high (under the assumption of fixed clocks)?
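For reference, the TF figures being thrown around in this thread all come from the standard GPU FLOPS formula: CUs × 64 shader lanes × 2 ops per clock (FMA) × clock speed.

```python
def tflops(cus: int, clock_mhz: float) -> float:
    """FP32 TFLOPS = CUs x 64 shader lanes x 2 ops/cycle (FMA) x clock."""
    return cus * 64 * 2 * clock_mhz / 1e6

print(round(tflops(36, 2230), 2))  # 10.28 - PS5 max boost
print(round(tflops(36, 2000), 2))  # 9.22  - the "10% lower" figure
print(round(tflops(52, 1825), 2))  # 12.15 - XSX fixed clock
```

This is where the 9.2 TF and 10.2 TF endpoints of the discussion come from, and why 2170 MHz is the 10 TF threshold.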
Can you find the 10% frequency drop quote? Because my crappy memory thinks he said (something along the lines of) 'and even if it did drop, as a 2% drop saves 10% power, any drop will be minimal and infrequent'.

I will say one last time that the issue with GitHub was as I have repeatedly said: people thought 2 GHz wasn't possible, and therefore 9.2 TF was not possible, so they were using it to spin BS that it was 8 vs 12 and the XSX 50% more powerful... do I need to get quotes?
 

No I'm not. Even Cerny is referencing the same thing:
Mark Cerny counters. "I think you're asking what happens if there is a piece of code intentionally written so that every transistor (or the maximum number of transistors possible) in the CPU and GPU flip on every cycle. That's a pretty abstract question, games aren't anywhere near that amount of power consumption. In fact, if such a piece of code were to run on existing consoles, the power consumption would be well out of the intended operating range and it's even possible that the console would go into thermal shutdown. PS5 would handle such an unrealistic piece of code more gracefully."

The dynamic power equation for an IC is P = Ps + Pd, where Ps is the static power needed just to keep the chip on (nothing variable) and Pd is the dynamic power draw based on frequency and activity level.
Transistors with a fixed power draw are captured in Ps; transistors that never flip still need power to stay operational, and that is part of Ps as well.
The dynamic portion is Pd = C·V²·f·a, where a is the activity level, C is capacitance, V is voltage and f is frequency.

Unless you have some material for me to read for me to understand your point better, I have really never understood any of your points. I've just largely not responded to them because I don't know what I'm responding to or where to even begin. You've never cited a resource for me to read.
 

1) 36 CUs - for more graphical efficiency; easier to keep the CUs busy more of the time.
2) Cooling solution for power draw
3) CPU uses 256-bit instructions
4) Continuous boost mode, increased frequency to match the cooling solution
5) Cooling solution designed to meet power draw
6) Main custom chip
6.1) Activities of the CPU and GPU are monitored to set frequencies, to make everything repeatable
6.2) SmartShift tech moves any unused power from the CPU to the GPU to deliver more pixels. Fixed frequency didn't allow for over 2.0 GHz; however, with SmartShift they were able to cap the frequency at 2.23 GHz to guarantee that on-chip logic operates properly.
6.3) Expect the GPU to spend most of its time at or close to 2.23 GHz.
6.4) Same strategy used for the CPU, which spends most of its time at the 3.5 GHz cap.
6.5) To reduce power by 10% only takes a couple of percent reduction in frequency, so he expects any down-clocking to be minor. (38m03s)
 
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive
Cerny also stresses that power consumption and clock speeds don't have a linear relationship. Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. "In general, a 10 per cent power reduction is just a few per cent reduction in frequency," Cerny emphasises.
I can work this out for you if you'd like.
If PS5 drops about 10% power, that's ~2153 MHz, down from 2230 MHz.
2153 from 2230 is a ~3.5% reduction in frequency, and becomes 9.92 TF.

Regardless I mean in Road to PS5 Cerny says the following:
It's a completely different paradigm: rather than running at constant frequency and letting power vary based on the workload, we run at essentially constant power and let the frequency band vary based on the workload.
We then tackled the engineering challenge of a cost-effective and high-performance cooling solution designed for that specific power level.
What they chose for that specific power level will ultimately determine how much the frequency moves around. That's the main limiter here; nothing else, really. Set a high enough power profile and the frequency will never drop. But then you have to contend with heat and price, which means a discussion about developing a cost-reasonable solution for the mass market.

You have to imagine that if PS5 clockspeeds are barely budging then Mark wouldn't write this:

Mark Cerny sees a time where developers will begin to optimise their game engines in a different way - to achieve optimal performance for the given power level. "Power plays a role when optimising. If you optimise and keep the power the same you see all of the benefit of the optimisation. If you optimise and increase the power then you're giving a bit of the performance back. What's most interesting here is optimisation for power consumption, if you can modify your code so that it has the same absolute performance but reduced power then that is a win. "

Anyway, this is still neat to me, because this is often an HPC computer science topic nowadays, as we're reaching some limitations of computation. I don't think this is going to matter really, but if we're having a technical conversation, my expectation is to see movement on a variable clock setup on a fixed power profile. We currently see clock speeds move on a variable clock setup with a variable power setup, with thermals being the limiter for performance.

As for the 2 GHz discussion:
Yes, and they weren't wrong. 2 GHz was not possible, and Cerny addressed this and indicated 2 GHz was not possible with fixed clocks. The switch to boost clocks made it possible.
Considering no other console had done boost clocks before, it's not fair to penalize those people and say they were purposefully spreading FUD (not to say some may not have been pushing an agenda). People were operating with known quantities. That's the right thing to do: we use what we know to make assumptions about the future, and if things turn out differently, we adjust. Someone should have put two and two together and said: this 2 GHz test isn't a lie, but we know it's not possible to hold 2 GHz fixed at a reasonable price point, so how else can this be achieved? (Hint: hopefully someone would have said boost clocks.)
But we didn't, probably because, as you say, perhaps people were more invested in spreading BS than in actually figuring out the truth.
 
Didn't you just extrapolate numbers out of completely made-up numbers? What was your point?

The numbers for the 5700 XT running FurMark vs its stated boost clock aren't made up. A stated max boost of 2.23 GHz for PS5 compared to 1905 MHz (the 5700 XT) is 1.17, and I rounded it down to 1.15 because I was being lazy. But hell, let's go with 1.17.

My simple extrapolation comes from applying that 1.15 figure (or 1.17, doesn't change much) to the 5700 XT minimum Furmark clocks and stated boost clocks (which are a little conservative).

But now lets have some more fun with 1.17 !!

Take the 1575 MHz FurMark score measured by Tom's. Multiply it by 1.17. And you get... 1842.75 MHz. MS's fixed clock increases above the 5700 XT's FurMark clock by almost the same proportion as Sony's boost clock increases above the 5700 XT's (also slightly conservative) boost clock.

No surprise really. Sony and MS are both working with AMD and looking at the same architecture and process. They're likely to be getting similar gains for their respective clocks.
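Spelling out the extrapolation above (the 5700 XT figures are the ones cited in-thread; the 1.17 scale factor is the poster's own assumption, not an official number):

```python
XT_BOOST_MHZ = 1905    # 5700 XT rated boost, as cited above
XT_FURMARK_MHZ = 1575  # sustained FurMark clock measured by Tom's, as cited
SCALE = 1.17           # ~2230 / 1905, the assumed next-gen clock uplift

# Scale the worst-case (FurMark) clock by the same factor as the boost clock:
print(round(XT_FURMARK_MHZ * SCALE, 2))  # 1842.75 - close to XSX's fixed 1825
```

That's the whole argument: both consoles' clocks land where RDNA1 clocks times ~1.17 would put them.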

And for RDNA2 GPUs to scale on frequency just 15% over RDNA1, that would mean the IPC advantage on RDNA2 over RDNA1 is a whopping 30%.

I'll go with 17% base and boost and game or whatever clocks. And I'd take a bet not more than 25%, at least not based on PS5 and XSX. And not with AMD recommended power and voltage defaults. Cerny even said they were having trouble making the chip even function much over 2.23 GHz and I can't think things will be significantly different for RDNA2 on PC.

Not sure where your 30% IPC increase comes from tbh. I've only seen figures for 50% perf/watt increase, and you could get that by going wider and relatively slower*.

*Edit: slower relative to the clocks you are now able to reach.

Edit2: plus of course they're always looking at ways to save power inside the chip regardless of IPC or frequency or process!
 
If PS5 drops about 10% power, that's ~2153 MHz, down from 2230 MHz.
2153 from 2230 is a ~3.5% reduction in frequency, and becomes 9.92 TF.

From the "Road to PS5" video:

Mark Cerny said:
To reduce power by 10% it only takes a couple of percent reduction in frequency.

How did you go from "a couple" to ~3.5%?
I mean, this is in the post directly above yours, and you can also watch the video at the 38m03s timestamp @Unknown Soldier gave...
2153 MHz? Where is this number even coming from?

I keep seeing numbers that seem completely made up here, even to the point of going directly against pretty explicit quotes from the PS5's lead system architect.
What's the point, and how is this valuable for the discussion?

The numbers for the 5700 XT running FurMark vs its stated boost clock aren't made up. A stated max boost of 2.23 GHz for PS5 compared to 1905 MHz (the 5700 XT) is 1.17, and I rounded it down to 1.15 because I was being lazy. But hell, let's go with 1.17.

And why are RDNA1 numbers on a GPU power virus like FurMark even remotely relevant to the amount of time the PS5's RDNA2 GPU will be at 2.23 GHz? Are Microsoft or Sony going to have power viruses available on their online stores?
I just don't see how the PS5's boost needs to ever be similar to a desktop graphics card's boost based on a previous architecture. First, because Cerny has repeatedly said the GPU spends most of its time at 2.23 GHz, whereas the 5700 XT does not spend most of its time at 1905 MHz; second, because the console's boost works differently, as the PS5's boost depends solely on power consumption whereas a desktop card will just churn out higher clocks if the ambient temperature is lower; and third, because it's a new architecture.


I'll go with 17% base and boost and game or whatever clocks. And I'd take a bet not more than 25%, at least not based on PS5 and XSX.
When have console GPUs ever dictated the maximum clock rates of PC GPUs? When the PS4 came out with a 800MHz GPU we had Pitcairn cards working at 1000MHz, with boost to 1050MHz.
That's a 25% higher base clock, with boost giving it a ~31% higher clock, but boost has come a long way since GCN1 chips.


And not with AMD recommended power and voltage defaults. Cerny even said they were having trouble making the chip even function much over 2.23 GHz and I can't think things will be significantly different for RDNA2 on PC.
What are AMD's recommended power and voltage values for the PS5 SoC, and how do they compare to the discrete PC graphics cards?


Not sure where your 30% IPC increase comes from tbh. I've only seen figures for 50% perf/watt increase, and you could get that by going wider and relatively slower.
The claims are for the architecture, i.e. for a lineup of GPUs that will go from (at least) mid-range to high-end, and not just specifically "Big Navi". I'm pretty sure the Macbook Pro's Navi 12 has a lot more than a 50% perf/watt increase over a Radeon 5700 XT, so making that 50% claim for "wider and slower" doesn't make much sense if they could have done it within the same architecture.
 
The dynamic power equation.
Cerny states that a 10% reduction in frequency relates to a 27% drop in power.
The equation is P = C·V²·f·a.
Considering C and a constant, that leaves V²·f, where V varies proportionally with f.
So P ∝ f³; put another way, power is cubic in frequency.
A simple example: 2230 MHz − 10% is 2007 MHz, which means 2007 is 0.9× the frequency of 2230 MHz.
Power then scales as 0.9³ = 0.729, and (1 − 0.729) × 100 = 27%.

So he states that a 10% reduction in frequency equates to a 27% drop in power, and the equation matches this perfectly.

Using the same equation, we can reduce the power by 10% and see its effect on frequency, which is just 0.9^(1/3) × 2230.

I reversed the equation and the result is 2153 MHz.
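The derivation above can be checked numerically. This is just the P ∝ f³ model (voltage assumed to scale linearly with frequency), not anything measured from real hardware:

```python
F_MAX_MHZ = 2230.0  # PS5 GPU max clock

def power_fraction(freq_mhz: float) -> float:
    """Dynamic power as a fraction of max, with P proportional to f^3."""
    return (freq_mhz / F_MAX_MHZ) ** 3

def freq_for_power(power_frac: float) -> float:
    """Invert the model: the clock that hits a given power fraction."""
    return F_MAX_MHZ * power_frac ** (1 / 3)

# A 10% frequency drop gives back ~27% power:
print(round(1 - power_fraction(0.9 * F_MAX_MHZ), 3))  # 0.271
# A 10% power cut only costs ~3.5% frequency:
print(round(freq_for_power(0.9)))  # 2153
```

Both of Cerny's statements (10% frequency ↔ 27% power, and "a couple of percent" frequency ↔ 10% power) fall out of the same cubic relationship.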
 
One thing people should remember is GPU utilization falls if there isn't enough memory bandwidth. GPU could be 4GHz and it wouldn't matter.

Also, one thing I have noticed is that GPUs are more power efficient when memory bandwidth limited, even if clocked higher. It's not a strict rule, but the RX 5600 XT seems generally more power efficient than the RX 5700s even though clocks are lower on the reference model, and this RX 5600 XT is clocked at 5700 XT speeds and is still about as efficient as the 5700:
[chart: performance per watt, 1920x1080]

The efficiency increases but the maximum performance is not there: even at about 5700 XT clocks, with way more FLOPS than the 5700, the 5600 XT is still significantly less performant than the 5700:
[chart: relative performance, 1920x1080]
 

So he doesn't state it.

It may be a fact, it may have been said - but Cerny did not state it.

For all you know DF asked what a 10% frequency drop would save in power - because nowhere have I seen Cerny state it.
 
DF wouldn't have written that if Cerny didn't say it. That was a deep dive interview with Cerny that DF had with him after the presentation.

For me the picture is about 2 items.
a) How much power can the chip handle
b) How much power are you willing to provide

All chips have a maximum amount of power they can sustain; that is just wattage/mm².
When that number gets too high, no amount of cooling will be able to cool it. Like trying to cool the sun or something. That's why Iron Man can't exist: he's running several megawatts through something 6 cm in diameter, with no cooling, sitting in his freaking chest cavity. It defies all laws of physics.
In the same way, we can't put that much power into a smaller space, and a smaller die will run into power limits before a bigger die will. It's the same reason that when we run high currents, the cables need to be thicker and thicker or the wires will melt.

Cerny indicated that they could not achieve 2.0Ghz fixed for all scenarios. That means there were workloads that would likely exceed their watt/mm^2 or PSU limits. So that is a hint on the watt/mm^2 limitation and also what their PSU is likely rated for. By enabling boost clocks, you can still use the same PSU and have higher clock rates and not run into the watt/mm^2 limitation provided the activity level was not those cases. And when it hit those rates, the clocks would just have to come down.

Which means there must be workloads that, for this particular PSU, rated for an SoC of a specific watt/mm², have cases below 2.0 GHz. And it is not unreasonable by any means to suggest that going from fixed to boost is a 10% bonus in clocks.
 
One thing people should remember is GPU utilization falls if there isn't enough memory bandwidth. GPU could be 4GHz and it wouldn't matter.

One would think Cerny would have worked out how much bandwidth the PS5 will need and how much memory was required. Likely why they never went the 512 GB/s route as was rumoured and stayed with 448 GB/s.

Also remember that Cerny said they have a coherency engine that informs the GPU of overwritten address ranges, and cache scrubbers that pinpoint eviction of those addresses. 19m17s

The cache scrubbers are also unique to the PS5 GPU and not available to other custom RDNA2 GPUs (XBSX). 25m55s
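The 448 GB/s figure falls straight out of the GDDR6 bandwidth formula (bus width in bits ÷ 8, times per-pin data rate in Gbps); the second line uses the XSX's publicly stated 320-bit bus on its fast 10 GB pool for comparison.

```python
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a GDDR memory bus."""
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 14))  # 448.0 - PS5: 256-bit bus, 14 Gbps GDDR6
print(bandwidth_gb_s(320, 14))  # 560.0 - XSX: 320-bit bus, fast 10 GB pool
```

The rumoured 512 GB/s would have required either a wider bus or faster-than-14 Gbps chips.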
 

It's irrelevant, you're using it like a tool to say 'well why would Cerny say it unless that's the expectation?' - we're discussing taking something out of context to spin a narrative.

I'll tell you something Cerny has stated, and I'll use quote marks - because it's something he's actually stated and not something that's been written by someone else and then used with the word "stated";

“When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I’d expect any downclocking to be pretty minor.”
"The system has enough power to enable the CPU and GPU to operate at maximum frequencies of 3.5 GHz and 2.23 GHz. Developers don't need to choose which ones to slow down."

From these statements, if the PS5 is running 10% slower for anything more than minimal amounts of time, I will gladly call Cerny a shifty Sony suit. My guess is this is not going to happen, which is why he's not quoted anywhere saying it.

So please stop saying someone states something unless, you know, they state it.

Edit - please don't take this as an attack - I know you're a clever guy and I'm not the sharpest tool in the box, but I just don't get the banging on about a 10% frequency drop, because the implication is that the PS5 will run like that frequently/a lot of the time, and I'm totally not getting that from Cerny.
 
The SoC has a fixed power limit in which it must divide between CPU and GPU.
There is enough power to run both at 100% frequency.
When the activity levels start surpassing the allotment provided to either CPU or GPU, the frequencies of the CPU or GPU will need to drop because it cannot sustain the activity level at that frequency. If you don't want it to drop, then you need to borrow from the other side. If you keep increasing the activity level, eventually you won't be able to borrow anymore and still be forced to downclock.

I'm perplexed by this, it's probably just me being thick - but if there's enough power to run both at 100% then how can activity levels surpass that (100%)?
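The scheme being described can be sketched as a clock governor under a fixed power budget: at typical activity levels there is enough budget to hold max frequency, but a high enough activity level exceeds it and forces a downclock. All constants here are invented for illustration.

```python
F_MAX_GHZ = 2.23   # GPU max clock
BUDGET_W = 180.0   # fixed power budget (made up)
K = 20.0           # model constant: P = K * activity * f^3 (made up)

def clamp_clock(activity: float) -> float:
    """Highest clock (up to max) that keeps modeled power within the budget."""
    f_budget = (BUDGET_W / (K * activity)) ** (1 / 3)
    return min(F_MAX_GHZ, f_budget)

print(round(clamp_clock(0.5), 2))  # 2.23 - typical activity: full clock held
print(round(clamp_clock(1.0), 2))  # 2.08 - power-virus activity: forced drop
```

In this toy model, "enough power to run both at 100% frequency" is true for ordinary activity levels; it's the pathological all-transistors-flipping workloads that blow past the budget and pull the clock down.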
 