The GPU and CPU have frequency caps.
If designed correctly, the console shouldn't have to run at max power for both the CPU and GPU to hit their caps. There has to be some power headroom available to ramp up a bit before hitting the maximum, and then it begins throttling frequency as required.
That being said, the fan should follow accordingly.
I think you may be right, and that backwards compatibility in fact will not cover the full PS4 library at launch.
But the power delivery budget is constant. It's running constant no matter what. The clocks ramp up to what the cooling system is designed for, and frequency only backs off if the workload exceeds a certain threshold.
It sounds to me like the power and cooling systems are fixed (at least when gaming)... but frequency is what changes.
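To make the idea concrete, here is a minimal sketch of a fixed power budget driving a variable clock. Every number (the 200 W budget, the 230 W worst-case demand, the cubic power/frequency relation) is invented for illustration; Sony hasn't published any of these figures.

```python
# Toy model: the power budget never moves; the clock backs off only when
# a workload's demand at the capped clock would exceed that budget.
# All numbers below are hypothetical.

POWER_BUDGET_W = 200.0   # fixed SoC power budget (made up)
GPU_CAP_GHZ = 2.23       # announced GPU frequency cap

def gpu_clock(activity: float) -> float:
    """Clock for a workload 'activity' between 0.0 (idle) and 1.0 (worst case)."""
    demand_at_cap = 230.0 * activity              # hypothetical draw at the cap
    if demand_at_cap <= POWER_BUDGET_W:
        return GPU_CAP_GHZ                        # stays pinned at the cap
    # Power assumed to scale roughly with f^3, so back off by a cube root.
    return GPU_CAP_GHZ * (POWER_BUDGET_W / demand_at_cap) ** (1 / 3)

for activity in (0.3, 0.8, 0.95, 1.0):
    print(f"activity {activity:.2f} -> {gpu_clock(activity):.2f} GHz")
```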
There are outlier cases. Think of Netflix, light indie games, pause screens, etc. Not every game will demand everything the hardware can give all the time. Even in open-world games there could be things like looking at the sky, which would mostly idle the console. That said, Sony would be idiots if they didn't design the cooling to account for the most demanding games, and I have a feeling Cerny is not an idiot, even if Sony sometimes has been.
I guess I am just saying the system seems to be designed to be boosting to a certain threshold constantly.
I am not complaining about the design. I mean, theoretically this way you predetermine the max power and cooling systems... and therefore can optimize those systems.
But it's not 'boosting'.
The chips will run at 3.5GHz/2.23GHz all the time, and only in very, very specific instances will they drop lower. No game uses every transistor of a chip 100% of the time, but if a game ever does on PS5, it'll downclock ever so slightly.
But games are so dynamic that you'll never see 100% load on everything to cause such a drop for 99.9% of the time.
A perfect example is a PC with monitoring on: you never see a situation where both the CPU and GPU are at 100% load at the same time, as the scenes are too dynamic.
Sony should have called it something else, as with its current name it's just getting people confused with how boost works on PC parts.
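One way to see the naming problem is to put the two behaviours side by side: PC boost starts from a guaranteed base and opportunistically climbs when there is headroom, whereas the behaviour described above starts at the cap and only dips under worst-case power draw. A rough sketch (the 2% dip is taken from Cerny's 'couple of percent' remark; everything else is placeholder):

```python
# PC-style boost: base clock is guaranteed, boost is opportunistic extra.
def pc_boost_clock(base_ghz: float, boost_ghz: float, headroom: float) -> float:
    return base_ghz + (boost_ghz - base_ghz) * headroom   # headroom in 0.0-1.0

# PS5-style "boost" as described above: the cap is the normal clock,
# and it only dips (by a couple of percent) under worst-case power draw.
def ps5_style_clock(cap_ghz: float, over_budget: bool) -> float:
    return cap_ghz * 0.98 if over_budget else cap_ghz

print(pc_boost_clock(3.5, 4.4, 0.5))    # 3.95  - somewhere between base and boost
print(ps5_style_clock(2.23, False))     # 2.23  - the cap is the default
print(ps5_style_clock(2.23, True))      # ~2.19 - rare worst-case dip
```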
The chips will run at 3.5GHz/2.23GHz all the time and only in very, very specific instances will they drop lower.
This isn't a good metric to go by. What if PlayStation only had 1 CU, then? It would have 448 GB/s per CU. That wouldn't outperform Series X at all. Or you could look at it as PS5 having 12.44 GB/s per CU and Series X having 10.76 GB/s per CU.
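Those per-CU figures fall straight out of the public specs (448 GB/s across 36 CUs for PS5, 560 GB/s across 52 CUs for Series X); a quick check:

```python
# Bandwidth per compute unit from the announced specs.
ps5_bw_gbs, ps5_cus = 448, 36
xsx_bw_gbs, xsx_cus = 560, 52

print(f"PS5:      {ps5_bw_gbs / ps5_cus:.2f} GB/s per CU")   # ~12.44
print(f"Series X: {xsx_bw_gbs / xsx_cus:.2f} GB/s per CU")   # ~10.77
```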
Where, when, and how much is yet to be clarified. It's kinda vague right now. If it would be a drop 1% of the time, why even bother? It wouldn't even have to be mentioned. It's not what I heard either.
Cerny has stated 'a couple of percent' drop in frequency, so it'll still be just over 10 TFLOPS... and unless anyone has any hard data relating to clock speeds for PS5, Cerny's word is it for now.
Engineering limitations. First you need a large enough bus to feed the full 448 GB/s into a single CU. Unlikely. You would need a bus so large that it exceeds normality. What if PS5 only had one mega CU that was as fast as Series X's total CUs? It's completely relevant to the topic of the machines, and there's no need to theorize this and that, as we know the specs of both machines.
That's nonsense. PS5 has to divide the available memory bandwidth between the GPU and CPU, so the effective GPU memory bandwidth will be much lower than the theoretical maximum and less deterministic. The XBSX GPU will have the fast memory all to itself (unless devs are morons) and the CPU will almost always use the slower memory. That way it will be much, much easier to actually utilize GPU resources to the max and do heavy data lifting on the CPU at the same time (think BVH, for example). I expect the XBSX to run circles around the PS5 when it comes to RTRT performance.
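A toy version of that argument, assuming (as the post does) that CPU traffic simply subtracts from whatever pool the GPU shares with it; real memory controllers and contention behaviour are messier, and the 40 GB/s of CPU traffic below is purely hypothetical:

```python
# Crude model: CPU traffic is subtracted straight from the pool it touches.
def gpu_share(pool_bw_gbs: float, cpu_traffic_gbs: float) -> float:
    return pool_bw_gbs - cpu_traffic_gbs

# PS5: one 448 GB/s pool shared by CPU and GPU.
print(gpu_share(448, 40))   # 408 GB/s left for the GPU (hypothetical CPU load)

# Series X, per the post's argument: the CPU mostly hits the 336 GB/s
# 6 GB region, leaving the 560 GB/s 10 GB region to the GPU -- an
# idealised best case, since both regions share the same physical bus.
print(gpu_share(560, 0))    # 560 GB/s
```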
As long as he didn't provide us with any data himself, there's no way there's no room for discussion. He could be talking in context; it was also a PR presentation. We need to know more before just assuming 2%. If that were the max, which would almost never be the case as he said, why even bother implementing it? Or even better, why not have it downclock 2% and have a stable system, if it doesn't make any difference anyway?
Engineering limitations. First you need a large enough bus to feed the full 448 GB/s into a single CU. Unlikely. You would need a bus so large that it exceeds normality.
Then you have a finite number of registers available for processing.
Then you would have to redesign the CU entirely to do multiple shader jobs simultaneously.
There are a slew of other problems you face, but the reality is, GPU performance has come from going more parallel with more processors, not fewer.
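To put a number on 'more parallel, not fewer': matching 36 CUs at 2.23 GHz with a single CU of the same width would take a clock no silicon can reach, assuming throughput scales with CU count times clock.

```python
# Clock a single CU would need to match 36 CUs at 2.23 GHz,
# assuming throughput scales linearly with CU count times clock.
cus, clock_ghz = 36, 2.23
print(f"{cus * clock_ghz:.1f} GHz")   # ~80.3 GHz -- far beyond anything buildable
```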
I think he did specifically say it's not like SmartShift and it will be a decision, which, tbh, is no big deal. I don't expect a lot of difference from a few % up or down anyway; IMO it's mostly a PR thing. You will not notice if PS5 is 10.2 TF or 9.8 TF. As far as I understand, AMD SmartShift is handled automatically at the hardware level, though... actually, reading about it, it seemed somewhat pointless for a console. On a PC you are running gaming and non-gaming applications, so shifting power between the CPU and GPU makes sense... but PS5 is a dedicated gaming device.
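For reference, the basic idea behind SmartShift-style sharing is just handing one side's unused power allocation to the other. A tiny sketch with invented numbers (the real split and heuristics live in AMD's firmware and aren't public):

```python
# Toy SmartShift: power a lightly loaded CPU doesn't need goes to the GPU.
# The 60 W / 140 W split and all demands below are made up.
CPU_ALLOC_W, GPU_ALLOC_W = 60.0, 140.0

def shift(cpu_demand_w: float, gpu_demand_w: float) -> tuple[float, float]:
    cpu_power = min(cpu_demand_w, CPU_ALLOC_W)
    spare = CPU_ALLOC_W - cpu_power                     # what the CPU isn't using
    gpu_power = min(gpu_demand_w, GPU_ALLOC_W + spare)  # GPU may borrow the spare
    return cpu_power, gpu_power

print(shift(30, 170))   # (30.0, 170.0) - idle CPU donates 30 W to the GPU
print(shift(60, 170))   # (60.0, 140.0) - busy CPU, GPU stays at its own share
```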
I’m just pointing out the obvious. You have seriously read way too much into it.
Cerny stressed the clocks won't vary with temperature or cooling performance. They'll vary with power consumption. However, what were their testing conditions? 40 degrees C environments without aircon? Will some countries have a crappier experience because they are hotter?
I don't know if Sony themselves used the word "boost" for the clocks, or if DF was the one to loosely use that term. Now I'm confused. Just how many boosts are in PS5?
This will be an important point to know, since there are a lot of PC CPUs around with no AVX2 compatibility (my old 10-core Xeon E5 v2 Ivy Bridge may finally need to be replaced), and others with poor AVX2 performance (Zen 1/1.5 included). Those AVX2 instructions are power hungry, and not every engine uses them a lot. It's very typical even for desktop CPUs to run at a lower clock when AVX2/AVX-512 is used.
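If you want to check whether a given CPU is affected, the flag is easy to read out on Linux; a quick sketch (Windows or macOS would need a CPUID library instead):

```python
# Linux-only: look for the 'avx2' flag in /proc/cpuinfo.
def has_avx2() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx2" in line.split()
    except OSError:
        pass
    return False

# Ivy Bridge (like the Xeon E5 v2 above) predates AVX2, which arrived with Haswell.
print("AVX2 supported" if has_avx2() else "No AVX2 on this CPU")
```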
I’m just pointing out the obvious.
Bandwidth per CU is not a useful metric.
Bandwidth per TF is much more useful.
The fewer CUs you have, the higher the clock you would need. You would need a 4500 MHz clock on the GPU with half the number of CUs PS5 has. Then multiply that clock by another 18 to get down to a single CU.
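The 'per TF' version of the same arithmetic, using the announced figures (10.28 TF and 448 GB/s for PS5, 12.15 TF and 560 GB/s for Series X), plus a sanity check on the clock claim above:

```python
# Bandwidth per teraflop from the announced specs.
ps5_bw_gbs, ps5_tf = 448, 10.28
xsx_bw_gbs, xsx_tf = 560, 12.15

print(f"PS5:      {ps5_bw_gbs / ps5_tf:.1f} GB/s per TF")   # ~43.6
print(f"Series X: {xsx_bw_gbs / xsx_tf:.1f} GB/s per TF")   # ~46.1

# Half the CUs need roughly double the clock for the same throughput.
print(f"{36 / 18 * 2.23:.2f} GHz for 18 CUs")               # ~4.46 GHz, the ~4500 MHz above
```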