Playstation 5 [PS5] [Release November 12 2020]

The GPU and CPU have frequency caps.

If designed correctly, it shouldn't have to run at max power for both the CPU and GPU to hit their caps. There has to be some power headroom available to ramp up a bit before hitting the max, and then it starts throttling frequency as required.

That being said, the fan should follow accordingly.

But the power delivery budget is constant. It's running constant no matter what. The clocks ramp up to what the cooling system is designed for, and frequency only backs off if the workload exceeds a certain amount.

It sounds to me that the power and cooling systems are fixed (at least when gaming), but frequency is what changes.
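Roughly what I mean, as a toy model (all numbers here are invented for illustration; power scaling with f·V² is just a common first-order approximation, not Sony's actual algorithm):

```python
# Toy model of a fixed power budget with variable clocks. All numbers are invented
# for illustration; power ~ f * V^2 is a common first-order approximation, not
# Sony's actual algorithm.

MAX_GPU_CLOCK_GHZ = 2.23   # frequency cap
POWER_BUDGET_W = 180.0     # hypothetical budget, not an official figure

def gpu_power(clock_ghz, activity):
    """Rough dynamic-power estimate; activity is workload intensity from 0 to 1."""
    voltage = 0.8 + 0.2 * (clock_ghz / MAX_GPU_CLOCK_GHZ)    # made-up V/f curve
    return 83.0 * activity * clock_ghz * voltage ** 2        # made-up scaling constant

def clock_for_budget(activity):
    """Stay at the cap; back the clock off only when the workload would exceed the budget."""
    clock = MAX_GPU_CLOCK_GHZ
    while gpu_power(clock, activity) > POWER_BUDGET_W and clock > 0.5:
        clock -= 0.01
    return round(clock, 2)

for load in (0.3, 0.8, 1.0):
    print(f"activity={load:.1f} -> clock={clock_for_budget(load):.2f} GHz")
```

With those made-up constants, only the full-load case drops below the cap, and only by about 2%, which is the sort of behaviour being described.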
 
and that backwards compatibility in fact will not cover the full PS4 library at launch.
I think you may be right:
https://blog.us.playstation.com/202...ls-of-playstation-5-hardware-technical-specs/

"Lastly, we’re excited to confirm that the backwards compatibility features are working well. We recently took a look at the top 100 PS4 titles as ranked by play time, and we’re expecting almost all of them to be playable at launch on PS5. With more than 4000 games published on PS4, we will continue the testing process and expand backwards compatibility coverage over time."
 
But the power delivery budget is constant. It's running constant no matter what. The clocks ramp up to what the cooling system is designed for, and frequency only backs off if the workload exceeds a certain amount.

It sounds to me that the power and cooling systems are fixed (at least when gaming), but frequency is what changes.

There are outlier cases. Think of Netflix, light indie games, the pause screen, etc. Not every game will demand everything the hardware can give all the time. Even in open-world games there could be things like looking at the sky, which would mostly idle the console. That said, Sony would be idiots if they didn't design the cooling to account for the most demanding games, and I have a feeling Cerny is not an idiot even if Sony sometimes has been.
 
There are outlier cases. Think of Netflix, light indie games, the pause screen, etc. Not every game will demand everything the hardware can give all the time. Even in open-world games there could be things like looking at the sky, which would mostly idle the console. That said, Sony would be idiots if they didn't design the cooling to account for the most demanding games, and I have a feeling Cerny is not an idiot even if Sony sometimes has been.

I guess I am just saying the system seems to be designed to be boosting to a certain threshold constantly.

I am not complaining about the design. I mean theoretically this way you predetermine the max power and cooling systems, and therefore can optimize those systems.
 
I guess I am just saying the system seems to be designed to be boosting to a certain threshold constantly.

I am not complaining about the design. I mean theoretically this way you predetermine the max power and cooling systems, and therefore can optimize those systems.

But it's not 'boosting'.

The chips will run at 3.5GHz/2.23GHz all the time, and only in very, very specific instances will they drop lower. No game uses every transistor of a chip 100% of the time, but if a game ever does on PS5 it'll downclock ever so slightly.

But games are so dynamic that you'll never see 100% load on everything to cause such a drop 99.9% of the time.

A perfect example is a PC with monitoring on: you never see a situation where the CPU and GPU are both at 100% load at the same time, because the scenes are too dynamic.

Sony should have called it something else, as with its current name it's just getting people confused with how boost works on PC parts.
 
I guess I am just saying the system seems to be designed to be boosting to a certain threshold constantly.

I am not complaining about the design. I mean theoretically this way you predetermine the max power and cooling systems, and therefore can optimize those systems.

The thing is, GPU speed and CPU speed are capped, so that if you idle the CPU the GPU cannot really suck up all the free power, and vice versa. Cerny mentioned that GPU speed is not capped by available power; there is something in the GPU implementation that sets the max clock, and it's not heat/power delivery. Similarly, using the CPU in a regular way cannot eat all the available power; rather, it requires using specific power-hungry AVX instructions. So being able to hit the max power draw is not guaranteed, even though it's easier than before thanks to dynamic clocks.
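A sketch of that capping idea (budget, demands and the proportional scaling are all hypothetical simplifications; the point is just that spare power from an idle CPU can't push the GPU past its own ceiling):

```python
# Sketch of "capped clocks under a shared budget": an idle CPU frees up power, but
# the GPU still can't exceed its own frequency ceiling. Budget, demands and the
# proportional scaling below are all hypothetical, not the real PS5 algorithm.

GPU_CAP_GHZ, CPU_CAP_GHZ = 2.23, 3.5
SHARED_BUDGET_W = 200.0          # made-up combined budget

def split_clocks(gpu_demand_w, cpu_demand_w):
    """Scale both clocks down if total demand exceeds the budget; never scale past the caps."""
    total = gpu_demand_w + cpu_demand_w
    scale = min(1.0, SHARED_BUDGET_W / total) if total else 1.0
    return min(GPU_CAP_GHZ, GPU_CAP_GHZ * scale), min(CPU_CAP_GHZ, CPU_CAP_GHZ * scale)

print(split_clocks(gpu_demand_w=150, cpu_demand_w=20))   # CPU near idle: GPU stays at its cap, no higher
print(split_clocks(gpu_demand_w=160, cpu_demand_w=60))   # over budget: both back off a little
```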
 
But it's not 'boosting'.

The chips will run at 3.5GHz/2.23GHz all the time, and only in very, very specific instances will they drop lower. No game uses every transistor of a chip 100% of the time, but if a game ever does on PS5 it'll downclock ever so slightly.

But games are so dynamic that you'll never see 100% load on everything to cause such a drop 99.9% of the time.

A perfect example is a PC with monitoring on: you never see a situation where the CPU and GPU are both at 100% load at the same time, because the scenes are too dynamic.

Sony should have called it something else, as with its current name it's just getting people confused with how boost works on PC parts.

Not "boosting", but in a boost mode 100% of the time, is what I meant.
 
The chips will run at 3.5GHz/2.23GHz all the time, and only in very, very specific instances will they drop lower.

Where, when, and by how much is yet to be clarified. It's kind of vague right now. If it were a drop 1% of the time, why even bother? It wouldn't even have to be mentioned. It's not what I heard either.
 
Where, when, and by how much is yet to be clarified. It's kind of vague right now. If it were a drop 1% of the time, why even bother? It wouldn't even have to be mentioned. It's not what I heard either.

Cerny has stated a 'couple of percent' drop in frequency, so it'll still be just over 10 TFLOPS... and unless anyone has hard data on PS5 clock speeds, Cerny's word is it for now.
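Back-of-envelope, taking the 'couple of percent' literally and using the usual public figures (36 CUs, 64 lanes per CU, 2 FLOPs per lane per clock for FMA):

```python
# Back-of-envelope PS5 GPU TFLOPS and the effect of a ~2% clock drop.
# 36 CUs x 64 lanes x 2 FLOPs per clock (FMA) is the usual way these peak
# figures are derived; the 2% is Cerny's "couple of percent" taken literally.

cus, lanes_per_cu, flops_per_lane = 36, 64, 2

def tflops(clock_ghz):
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

print(f"at the 2.23GHz cap: {tflops(2.23):.2f} TFLOPS")        # ~10.28
print(f"with a 2% drop:     {tflops(2.23 * 0.98):.2f} TFLOPS") # ~10.07, still above 10
```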

This isn't a good metric to go by. What if PlayStation only had 1 CU then? It would have 466 GB/s per CU. That wouldn't outperform Series X at all.

What if PS5 only had one mega CU that was as fast as Series X's total CUs? It's completely relevant to the topic of the machines, and there's no need to theorize this and that when we know the specs of both machines.
 
Cerny has stated a 'couple of percent' drop in frequency, so it'll still be just over 10 TFLOPS... and unless anyone has hard data on PS5 clock speeds, Cerny's word is it for now.

As long as he didn't provide us with any data himself, there's no way there's no room for discussion. He could be talking in context; it was also a PR presentation. We need to know more before just assuming 2%. If that was the max, which he said would almost never be the case, why even bother implementing it? Or even better, why not have it downclock 2% and have a stable system, if it doesn't make any difference anyway?
 
What if PS5 only had one mega CU that was as fast as Series X's total CUs? It's completely relevant to the topic of the machines, and there's no need to theorize this and that when we know the specs of both machines.
Engineering limitations. First, you need a large enough bus to feed the full 466 GB/s into a single CU. Unlikely. You would need a bus so large it exceeds normality.
Then you have a finite number of registers available for processing.
Then you would have to redesign the CU entirely to do multiple shader jobs simultaneously.
There are a slew of other problems you face, but the reality is that GPU power has come from going more parallel, with more processors, not fewer.
 
That's nonsense. The PS5 has to divide the available memory bandwidth between the GPU and CPU, so effective GPU memory bandwidth will be much lower than the theoretical maximum and less deterministic. The XBSX GPU will have the fast memory all to itself (unless devs are morons) and the CPU will almost always use the slower memory. That way it will be much, much easier to actually utilize GPU resources to the max and do heavy data lifting on the CPU at the same time (think BVH, for example). I expect the XBSX to run circles around the PS5 when it comes to RTRT performance.

10GB of RAM for the GPU vs 10GB+ of RAM for the GPU with stupidly fast I/O??

There's one clear winner there for me, and it's not the first option. I expect that after developers get used to the SSD on PS5, it'll run circles around Series X in regards to asset quality, asset resolution, and asset density.
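Rough numbers for the bandwidth side of that argument, using the public spec figures (the CPU traffic figure is made up, and real contention overhead is ignored):

```python
# Rough GPU-visible bandwidth when the CPU is also hitting memory. Bandwidth figures
# are the public spec numbers; the CPU traffic figure is made up, and contention
# overhead (plus the fact that both XBSX pools sit on the same bus) is ignored.

PS5_UNIFIED_GBPS = 448.0
XSX_FAST_GBPS = 560.0      # 10 GB pool, normally reserved for the GPU
XSX_SLOW_GBPS = 336.0      # 6 GB pool, where CPU/audio/OS data would typically live

cpu_traffic_gbps = 40.0    # hypothetical CPU demand

ps5_gpu_left = PS5_UNIFIED_GBPS - cpu_traffic_gbps   # GPU and CPU share one pool
xsx_gpu_left = XSX_FAST_GBPS                         # CPU traffic mostly lands in the slow pool

print(f"PS5 GPU bandwidth left: ~{ps5_gpu_left:.0f} GB/s")
print(f"XSX GPU bandwidth left: ~{xsx_gpu_left:.0f} GB/s (fast pool), CPU pool ~{XSX_SLOW_GBPS:.0f} GB/s")
```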

As long as he didn't provide us with any data himself, there's no way there's no room for discussion. He could be talking in context; it was also a PR presentation. We need to know more before just assuming 2%. If that was the max, which he said would almost never be the case, why even bother implementing it? Or even better, why not have it downclock 2% and have a stable system, if it doesn't make any difference anyway?

I don't think you've understood the presentation.
 
Engineering limitations. First, you need a large enough bus to feed the full 466 GB/s into a single CU. Unlikely. You would need a bus so large it exceeds normality.
Then you have a finite number of registers available for processing.
Then you would have to redesign the CU entirely to do multiple shader jobs simultaneously.
There are a slew of other problems you face, but the reality is that GPU power has come from going more parallel, with more processors, not fewer.

You have seriously read way too much into it.
 
As far as I understand, AMD SmartShift is handled automatically at the hardware level though. Actually, reading about it, it seemed somewhat pointless for a console. On a PC you are running gaming and non-gaming applications, so shifting power between the CPU and GPU makes sense, but the PS5 is a dedicated gaming device.
I think he did specifically say it's not like SmartShift and it will be a decision, which tbh is no big deal. I don't expect a lot of difference between a few % up or down anyway; IMO it's mostly a PR thing. You will not notice whether PS5 is 10.2TF or 9.8TF.

I also see many people expecting PS5 to be cheaper than XSX by $100. Where does this come from?

I expect XSX to have a ~$15-20 higher BOM, and I doubt they will price it out of PS5's range. I think $499 for both.
 
You have seriously read way too much into it.
I'm just pointing out the obvious.
Bandwidth per CU is not a useful metric.
Bandwidth per TF is much more useful.
The fewer CUs you have, the more obscenely high a clock you would need. You would need a 4500 MHz clock on the GPU with half the number of CUs the PS5 has. Then multiply that by 18 to bring it down to 1 CU.
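For comparison, both metrics side by side using the public spec figures:

```python
# Bandwidth per CU vs bandwidth per TFLOP for both consoles, using public spec
# figures (XSX bandwidth here is the fast 10 GB pool only). Per-CU numbers swing
# with CU count; per-TF numbers normalize for clock speed.

specs = {
    "PS5":      {"bw_gbps": 448.0, "cus": 36, "tflops": 10.28},
    "Series X": {"bw_gbps": 560.0, "cus": 52, "tflops": 12.15},
}

for name, s in specs.items():
    print(f"{name}: {s['bw_gbps'] / s['cus']:.1f} GB/s per CU, "
          f"{s['bw_gbps'] / s['tflops']:.1f} GB/s per TFLOP")
```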
 
However, what were their testing conditions? 40 degrees C environments without aircon? Will some countries have a crappier experience because they are hotter?
Cerny stressed the clocks won't vary with temperature or cooling performance. They'll vary with power consumption.
The bottleneck for not keeping max clocks 100% of the time will be the PSU and voltage controls, not the temperature.

The difference between throttling down due to temperature and power consumption is that with the latter you can guarantee the same performance among all consoles (they're all using the same PSU and VRMs), so that devs can predict system performance.

Of course, I'm guessing that on a silly edge case if you put the console in an oven then it'll throttle from the temperature (like all SoCs in consoles from the last 15+ years AFAIK?). I bet almost all home consoles would eventually throttle if you put them in a 40ºC room with no airflow, and no home console manufacturer designs their consoles for that scenario.
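In other words, something like this (weights and budget are invented, purely to illustrate why a modeled power estimate is deterministic while a temperature reading isn't):

```python
# Why power-based throttling is deterministic: the clock decision comes from a
# power estimate modeled off what the chip is doing, not from a temperature sensor,
# so every console makes the same call for the same workload. Weights and the
# budget below are invented purely for illustration.

def modeled_power_w(counters):
    """Deterministic estimate: identical activity counters -> identical estimate on every unit."""
    weights = {"alu": 1.2, "mem": 0.8, "avx": 2.5}
    return sum(weights[k] * v for k, v in counters.items())

def pick_clock(counters, cap_ghz=2.23, budget_w=180.0):
    est = modeled_power_w(counters)
    return cap_ghz if est <= budget_w else round(cap_ghz * budget_w / est, 2)

# Same workload -> same clock, regardless of room temperature or fan speed.
print(pick_clock({"alu": 90.0, "mem": 40.0, "avx": 10.0}))  # under budget: stays at 2.23
print(pick_clock({"alu": 90.0, "mem": 60.0, "avx": 25.0}))  # over budget: backs off
```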


Now I'm confused. Just how many boosts are in the PS5?
I don't know if Sony themselves used the word "boost" for the clocks, or if DF was the one to loosely use that term.
In reality it's not a boost like we've seen on PC GPUs. It's throttling down from a typical value on edge cases, based on power consumption.
Sony did use the term "boost" for backwards compatibility; they already used it for when the PS4 Pro runs PS4 games with the GPU at 911MHz instead of the non-boost 800MHz.
On PS5's Boost BC mode, the console will be running PS4 Pro games with the GPU clocked at 2.23GHz instead of PS4 Pro's 911MHz, together with the much faster CPU and memory bandwidth.
It'll be interesting to see e.g. Death Stranding running at >60 FPS on 120Hz HDMI 2.1 TVs.
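Quick ratios on that BC boost (the PS4 Pro CPU clock of 2.13GHz is from the public spec, not this thread, and the raw clock ratio ignores the Jaguar-to-Zen 2 IPC jump):

```python
# Clock headroom in PS5's BC boost mode relative to the PS4 Pro. The PS4 Pro CPU
# clock (2.13GHz Jaguar) is from the public spec; ratios ignore the Jaguar -> Zen 2
# IPC gain, so the real CPU uplift is larger than the raw clock ratio suggests.

ps4_pro_gpu_mhz, ps5_gpu_mhz = 911, 2230
ps4_pro_cpu_ghz, ps5_cpu_ghz = 2.13, 3.5

print(f"GPU clock ratio: {ps5_gpu_mhz / ps4_pro_gpu_mhz:.2f}x")   # ~2.45x
print(f"CPU clock ratio: {ps5_cpu_ghz / ps4_pro_cpu_ghz:.2f}x")   # ~1.64x, before IPC
```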


Those AVX2 instructions are power hungry, and not every engine uses them a lot. It's very typical even for desktop CPUs to run at a lower clock when AVX2/512-bit AVX is used.
This will be an important point to know, since there are a lot of PC CPUs around with no AVX2 support (my old 10-core Xeon E5 v2 Ivy Bridge may finally need to be replaced), and others with poor AVX2 performance (Zen 1/1.5 included).
Zen2 has no AVX512 BTW, so that one isn't coming to new engines on consoles for sure.
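If anyone wants to check their own PC, a quick Linux-only way is to read the flag list in /proc/cpuinfo:

```python
# Quick Linux-only check for AVX2 / AVX-512 by reading the flag list in /proc/cpuinfo.
# On an older Ivy Bridge Xeon like the one mentioned above, "avx2" will be absent.

def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX2:   ", "yes" if "avx2" in flags else "no")
print("AVX-512:", "yes" if any(flag.startswith("avx512") for flag in flags) else "no")
```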
 
I'm just pointing out the obvious.
Bandwidth per CU is not a useful metric.
Bandwidth per TF is much more useful.
The fewer CUs you have, the more obscenely high a clock you would need. You would need a 4500 MHz clock on the GPU with half the number of CUs the PS5 has. Then multiply that by 18 to bring it down to 1 CU.

Or just one hypothetical large CU running at 2GHz to counter your hypothetical CU.

Seriously, let's move on.
 