To answer the critique put forward here: we have heard from devs making games that push the CPU hard that the GPU in the PS5 does in fact downclock. But whether an end user notices this in a game with TAA, post-processing and DRS is a whole other question. That is the point of the PS5's design.
For example, think of a game with a framerate unlocked to 120 and DRS targeting a high output resolution. How exactly does that fit into a fixed, shared power budget? The obvious answer is that it stresses both CPU and GPU to their max, and the clocks adjust to keep power inside the budget.
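To make that concrete, here is a deliberately crude toy model of a fixed, shared power budget. This is entirely my own sketch: the budget, wattages and the cube-law power curve are invented, only the published max clocks are real, and Sony's actual power-management logic (which works on instruction-level activity) is far more sophisticated.

```python
# Toy model of a fixed, shared CPU+GPU power budget (illustrative only;
# every constant except the published max clocks is invented).

TOTAL_BUDGET_W = 200.0                  # hypothetical combined budget
CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23    # PS5's published max clocks

def power_draw(freq_ghz, max_ghz, activity, idle_w, peak_w):
    """Crude model: draw grows with workload activity (0..1) and roughly
    with the cube of frequency (voltage scales alongside clock)."""
    return idle_w + (peak_w - idle_w) * activity * (freq_ghz / max_ghz) ** 3

def settle_clocks(cpu_activity, gpu_activity):
    """Start both chips at max and shed GPU frequency until the combined
    draw fits the budget (the real system shifts budget more cleverly)."""
    cpu_ghz, gpu_ghz = CPU_MAX_GHZ, GPU_MAX_GHZ
    while (power_draw(cpu_ghz, CPU_MAX_GHZ, cpu_activity, 10, 60) +
           power_draw(gpu_ghz, GPU_MAX_GHZ, gpu_activity, 20, 180)) > TOTAL_BUDGET_W:
        gpu_ghz -= 0.01
    return cpu_ghz, gpu_ghz

# A 120fps-unlocked scene with DRS can saturate both chips at once;
# note the GPU only gives up a modest slice of its frequency:
print(settle_clocks(cpu_activity=1.0, gpu_activity=1.0))   # ~ (3.5, 2.02)
```

The point of the toy: because power rises much faster than frequency, staying inside the budget only costs a small clock reduction, which is Cerny's "a couple of percent of frequency buys ten percent of power" argument.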
Of course it downclocks. Cerny told us that two years ago! But does it noticeably impact final performance? The answer is, in the vast majority of cases, no. This is what we have seen in many open world games that perform as well as or even better than expected. In the end the PS5 behaves like a 10TF machine (better, in many cases) even in demanding games, and the dynamic clock system does not measurably hurt real-world performance. It's not only because of DRS or TAA; Cerny patiently explained it in several interviews:
- The downclocks can be very short (a few ms), so when they do occur, the average impact across a whole 16ms frame is negligible (see the rough arithmetic sketch after the quote below).
- Some drops will have no impact on performance at all, because the system (CPU or GPU) can downclock when it reckons it won't be used (idle) during that time, as Cerny explained:
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive

"There's another phenomenon here, which is called 'race to idle'. Let's imagine we are running at 30Hz, and we're using 28 milliseconds out of our 33 millisecond budget, so the GPU is idle for five milliseconds. The power control logic will detect that low power is being consumed - after all, the GPU is not doing much for that five milliseconds - and conclude that the frequency should be increased. But that's a pointless bump in frequency... At this point, the clocks may be faster, but the GPU has no work to do. Any frequency bump is totally pointless. The net result is that the GPU doesn't do any more work, instead it processes its assigned work more quickly and then is idle for longer, just waiting for v-sync or the like. We use 'race to idle' to describe this pointless increase in a GPU's frequency... So, when I made the statement that the GPU will spend most of its time at or near its top frequency, that is with 'race to idle' taken out of the equation... The same is true for the CPU."
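To put rough numbers on the first bullet, here is a quick back-of-the-envelope sketch. The 2ms dip length and 5% clock penalty are invented for illustration; the arithmetic is just the proportional slowdown.

```python
# Back-of-the-envelope math for the "downclocks are very short" point.
# Invented numbers: suppose 2 ms of a 16.6 ms frame run at a 5% lower
# clock, and the GPU is clock-bound for that stretch.
FRAME_MS = 16.6
dip_ms, clock_penalty = 2.0, 0.05       # hypothetical dip length / depth

# The work scheduled for those 2 ms now takes 2 / 0.95 ms instead:
extra_ms = dip_ms / (1.0 - clock_penalty) - dip_ms
print(f"frame grows by {extra_ms:.3f} ms "
      f"= {extra_ms / FRAME_MS:.1%} of the frame")
# -> ~0.105 ms, ~0.6% of the frame: well inside what DRS/TAA absorbs.
```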
We don't need to treat this as some great mystery. Have you recently noticed the PS5 dropping frames or resolution during demanding scenes? Take the currently most demanding (and genuinely next-gen) benchmark, the RT / compute heavy UE5 demo: do you think the PS5 behaves like a 9TF machine next to a 12TF machine there?
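For reference, the 10TF figure quoted above is just the peak FP32 math at the maximum clock, which also shows why a couple-percent downclock barely dents it:

```python
# Where the "10TF" figure for the PS5 comes from: peak FP32 throughput
# at the maximum GPU clock (these are the published specs).
cus = 36                    # compute units
alus_per_cu = 64            # stream processors per CU
flops_per_alu_cycle = 2     # one fused multiply-add = 2 FP32 ops
max_clock_hz = 2.23e9

tflops = cus * alus_per_cu * flops_per_alu_cycle * max_clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")   # -> 10.28
# A worst-case couple-percent downclock costs only ~0.2 TF of that peak.
```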