Only for the same workloads. If a dev is getting 90% utilisation out of PS5, they should also be getting 90% utilisation out of XBSX and putting more on screen with it. The heat difference comes down to how much extra heat the higher clocks generate versus the wider design.
52 CUs at 1.8 GHz sets our XBSX baseline heat output. On width alone, 36/52 puts PS5 at ~69% of XBSX's heat. If heat increased linearly with clock, 2.2/1.8 would add ~22%, so 0.69 × 1.22 ≈ 85% of XBSX's heat output. So it's only the superlinear ramp of heat with frequency that risks pushing PS5 past XBSX. That 22% extra clock speed would need to produce over 44% more heat per CU (52/36 ≈ 1.44) for PS5 to overtake MS on heat output.
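To make that arithmetic concrete, here's a quick sketch of the same numbers (using the round 1.8/2.2 GHz figures above, with XBSX heat normalised to 1.0):

```python
# Back-of-envelope heat comparison using the post's round numbers.
xbsx_cus, ps5_cus = 52, 36
xbsx_clk, ps5_clk = 1.8, 2.2            # GHz

width_ratio = ps5_cus / xbsx_cus        # ~0.69: PS5 is ~69% as wide
clk_ratio = ps5_clk / xbsx_clk          # ~1.22: PS5 clocks ~22% higher

# If heat scaled linearly with clock, PS5 lands well under XBSX:
linear_estimate = width_ratio * clk_ratio   # ~0.85

# For PS5 to merely *match* XBSX, per-CU heat would have to rise by:
breakeven = 1 / width_ratio                 # ~1.44, i.e. +44%

print(f"linear estimate: {linear_estimate:.2f}x XBSX heat output")
print(f"break-even per-CU heat increase: {breakeven:.2f}x")
```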
Thinking out loud:
- I think temps are directly correlated with power drawn over the surface area of the chip, in which case a smaller chip ends up with more heat per mm² than a larger chip at the same power.
- Regarding clock speed and voltage, that relationship is strongly nonlinear: past the sweet spot, each extra step in frequency demands a disproportionate bump in voltage.
- Regarding load, it should scale roughly with #pixels × frame rate.
When you look at these factors individually, I don't think the argument holds _only_ at the same workload. Because of the diminishing returns of clock speed versus voltage, once you pass the optimum you get less and less frequency back for each mV you put in. So even on lighter loads, you could be pushing through more power (see the sketch below).
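A minimal sketch of why that is: dynamic power goes roughly as P ≈ C · f · V², and V itself has to climb with f past the sweet spot. The (frequency, voltage) pairs below are invented purely for illustration; only the shape of the curve matters:

```python
# Toy illustration of diminishing returns: power ~ C * f * V^2, evaluated
# at made-up operating points along a typical V/f curve.
def dynamic_power(freq_ghz, volts, cap=1.0):
    return cap * freq_ghz * volts ** 2

points = [(1.6, 0.85), (1.8, 0.90), (2.0, 1.00), (2.2, 1.12)]  # (GHz, V)

base_f, base_v = points[0]
base_p = dynamic_power(base_f, base_v)
for f, v in points:
    rel_p = dynamic_power(f, v) / base_p
    print(f"{f:.1f} GHz @ {v:.2f} V -> {f / base_f:.2f}x frequency "
          f"for {rel_p:.2f}x power")
```

By the last step you're paying roughly 2.4x the power for about 1.4x the frequency, which is how a narrow chip at high clocks can out-draw a wider chip at modest clocks even on a lighter load.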
From Wikipedia:

"While performance per watt is useful, absolute power requirements are also important. Claims of improved performance per watt may be used to mask increasing power demands. For instance, though newer generation GPU architectures may provide better performance per watt, continued performance increases can negate the gains in efficiency, and the GPUs continue to consume large amounts of power." [32]

"The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature. Power supplies, motherboards, and some video cards are some of the subsystems affected by this. So their power draw may depend on temperature, and the temperature or temperature dependence should be noted when measuring." [34][35]
So Cerny said the system is deterministic so that heat would not play a factor. But that can't be strictly true as a matter of physics; there are thermal limits. Run two PS5s side by side and swap the heatsink on one for something woefully inadequate, and I'd expect that unit to shut down.
If I make a PS5 operate in a small enclosed space (an airtight box), I expect it to run into issues with the voltage regulators eventually. So I'm curious what happens in that scenario: will it downclock, or will it shut down? If the latter, it is deterministic; if the former, it is not. It's an easy test once the full retail units are released (see the toy model below).
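To pin down what "deterministic" would mean in that test, here's a toy model of the two policies (my framing, not Sony's actual algorithm; all numbers invented):

```python
# Two toy clock policies. "Deterministic": frequency is a function of
# workload activity only, so every unit behaves identically and an
# overheated box forces a hard shutdown rather than a quiet downclock.
# "Reactive": frequency follows the temperature sensor, so behaviour
# varies from unit to unit with the quality of the cooling.

def deterministic_clock(activity: float) -> float:
    # Same activity -> same clock on every console, whatever the cooling.
    return 2.2 if activity < 0.9 else 2.0    # GHz, illustrative

def reactive_clock(temp_c: float) -> float:
    # A worse heatsink means higher temps, which means lower clocks.
    return 2.2 if temp_c < 85.0 else 2.0     # GHz, illustrative

def thermal_protection(temp_c: float) -> bool:
    # Last-resort cutoff that even a deterministic design must keep.
    return temp_c >= 105.0                   # True -> shut down
```

In the airtight-box test, the deterministic policy holds its clocks until thermal_protection trips, while the reactive one quietly sheds frequency first.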
Not saying this won't happen, but it would be a massive engineering failure.
Not necessarily. It could just mean more costs for cooling.
I don't think cooling the SSD will be much of a problem. MS seems able to cool their tiny SSD memory cards with a very small heatsink embedded on them; while those cards are somewhat slower, they are also smaller. Sony could put a bigger heatsink on their SSD.
Then they should guarantee the bandwidth if it's so trivial. MS worked with Seagate to develop a custom SSD that guarantees 2.4 GB/s, as do their external drives, which are not third-party NVMe devices. That means they've already done their testing, and developers have a proven 2.4 GB/s they can rely on for whatever they want and whatever features it supports. Third-party NVMe drives all throttle when they run into heat trouble, and that is out of Sony's control. In reference to an older, lower-performing one:
So Sony must approve only drives that can maintain their 5.5 GB/s under heavy load without throttling inside the PS5 bay. There are also extra design considerations for cooling that bay without knowing whether users will even have something in there.
I do not think it is easy to guarantee bandwidth as high as 2.4 GB/s in a small form factor; nothing I can point to (e.g. a laptop with an M.2 NVMe slot) sustains an NVMe drive that fast. As written in this Techspot article (https://www.techspot.com/review/1893-pcie-4-vs-pcie-3-ssd/): "Even with the heatsinks on, the drives may still throttle if they are under sustained load for more than about 15 minutes." I see that as a warning flag for trying to use the SSD as RAM; if it is to be used that way, the customisations and cooling need to be significant to sustain that level of operation for play sessions as long as 3 hours.
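That 15-minute throttling claim is easy to check once retail hardware is out. A minimal sketch of such a test on Linux (TEST_FILE is a hypothetical path to a large pre-created file on the drive under test; serious benchmarking would use a tool like fio):

```python
import mmap
import os
import time

# Hypothetical parameters -- adjust for the drive under test.
TEST_FILE = "/mnt/nvme/testfile.bin"   # large pre-created file on the SSD
BLOCK = 16 * 1024 * 1024               # 16 MiB per read
DURATION = 20 * 60                     # 20 min, past the ~15 min throttle mark
REPORT_EVERY = 10                      # seconds between throughput reports

# O_DIRECT bypasses the page cache so we time the drive, not RAM. It needs
# an aligned buffer; anonymous mmap allocations are page-aligned.
buf = mmap.mmap(-1, BLOCK)
fd = os.open(TEST_FILE, os.O_RDONLY | os.O_DIRECT)
f = os.fdopen(fd, "rb", buffering=0)
whole_blocks = os.fstat(fd).st_size // BLOCK  # skip any unaligned tail

start = time.monotonic()
window_start, window_bytes, block_idx = start, 0, 0
while time.monotonic() - start < DURATION:
    if block_idx == whole_blocks:      # wrap around to keep the load sustained
        f.seek(0)
        block_idx = 0
    window_bytes += f.readinto(buf)
    block_idx += 1
    now = time.monotonic()
    if now - window_start >= REPORT_EVERY:
        gbps = window_bytes / (now - window_start) / 1e9
        print(f"t={now - start:6.0f}s  {gbps:5.2f} GB/s")
        window_start, window_bytes = now, 0
f.close()
```

If the reported GB/s sags after the first several minutes, the drive is thermally throttling under sustained load.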
These SSDs have to last the lifespan of the console, so how they behave under sustained load is critical, and it's not a topic where I'd just say "hey, it should be easy if MS can do it." MS did do it, but at the cost of bringing speeds down to nearly half of what's offered out there, and in return they guaranteed it, likely for the life of the console.
Or to put it plainly: if it's so simple to guarantee performance consistency by slapping on more cooling, why go with a speed nearly half of what's on the market today? Or are there factors we aren't yet aware of? (Though this could be linked to the fact that they wanted an external expansion port.)
I hope Sony has made the expansion bay large enough to accommodate the large heatsinks that may come on 7 GB/s drives.
He said not every drive will fit.
Yeah, I didn't think about that until you mentioned it.
These aren't real issues, though; they can be engineered away, just at a higher cost.