I mean, we'll have to wait and see. This is the last bit of the puzzle we have related to next-gen systems, so we are in for a few months of discussion. At one point we had people saying 2.0GHz is impossible and Cerny is not an idiot to design a fast and narrow console, yet, here we are... : )
Other arguments were that 2.0 GHz may not have been impossible, but that there was a good chance that it could threaten the consistency of the platform, not match the competition, and be problematic in terms of design and yields.
Sony appears to have done some things to mitigate the consistency problem, although we don't have a good description of the sorts of complexities this can add to development or evaluation versus a fixed-frequency platform. The performance inconsistency in the platform is theoretically deterministic, but the set of variables that goes into that function is larger and less-understood.
We do not have platform comparisons or benchmarks, and those in the best position to know the performance deltas and their cause likely won't be free to discuss them. However, it seems from the postures the vendors have adopted and the specs that this implementation doesn't quite match the competition as far as the elements of the platform determined by 36 CUs and high clocks are concerned. Maybe more data will become available, but the current framing is whether a non-trivial deficiency is noticeable and where the floor is for the weaker implementation.
Yields and knock-on effects throughout the console like the cooling setup are either awaiting further disclosure or are unlikely to be revealed.
Perhaps we can revisit the Xbox claims about guaranteed clocks in light of some of the scenarios given, like very high wide-vector CPU utilization and atypically high GPU utilization. The details on the power delivery and cooling do seem pretty generous, but there could be more details on how those situations are handled.
It seems likely that the PS5 designers had other target elements that influenced the design, such as cost, volume, or power budget. I think there's a decent chance that some of those limits were based on market projections or financial considerations by those above the hardware teams. In that scenario, the PS5's design choices may have been the best that they could do within constraints Microsoft did not impose on its project. Whether that's fully the case also depends on where the PS5's PSU, bill of materials, and case wind up.
Projections about where the process technology would be, or where other elements like DRAM or software would be in the time frame could have made 36 CUs a rational choice, but then those inputs could have been found to be mistaken after it was too late to change.
If PS5 uses power consumption to determine clock speed how is that not going to result in variability among different units?
Cerny talked about an "idealized SOC". AMD's clocking method uses activity counters and small units on the die whose electrical activity is representative of the ALUs and other on-die units. These are paired with tables that indicate the power cost of certain actions at various system states, and that data comes from physical testing of the chip to determine its physical and electrical variation.
The idealized SOC would represent some average set of silicon properties that all chips would meet, and the DVFS system would act like all chips had those properties.
Chips with better properties would leave their performance potential untapped, and chips that failed to meet that minimum would be rejected from being in the PS5.
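To make the determinism point concrete, here's a toy sketch of how an "idealized SoC" power model could yield identical clock decisions on every unit. Everything here is invented for illustration (the event names, energy costs, frequency steps, and power budget are not the real figures, and the actual AMD/Sony implementation is not public); the key idea is just that every console evaluates the same fixed table against the same activity counters, so actual silicon variation never enters the calculation.

```python
# Toy model of deterministic DVFS against an "idealized SoC" power table.
# All constants are hypothetical illustrations, not real PS5 values.

# Per-event energy costs for the *idealized* chip. Every unit uses these
# same constants regardless of how good or bad its actual silicon is.
IDEALIZED_ENERGY_COST = {
    "fp32_op": 0.40,      # cost per million FP32 operations
    "mem_access": 0.90,   # cost per million memory accesses
    "idle_cycle": 0.05,   # cost per million idle cycles
}

# Allowed clock steps (GHz), highest first, and the fixed power budget (W).
FREQ_STEPS_GHZ = [2.23, 2.10, 2.00, 1.90, 1.80]
POWER_BUDGET_W = 180.0

def estimated_power(activity_counts, freq_ghz):
    """Estimate power from activity counters, assuming idealized silicon.

    activity_counts: events (in millions) per second. Power is scaled
    linearly with clock here for simplicity; a real model would also
    account for voltage scaling (P ~ C * V^2 * f).
    """
    energy = sum(IDEALIZED_ENERGY_COST[evt] * n
                 for evt, n in activity_counts.items())
    return energy * freq_ghz / FREQ_STEPS_GHZ[0]  # normalized to top clock

def pick_frequency(activity_counts):
    """Choose the highest clock whose estimated power fits the budget."""
    for f in FREQ_STEPS_GHZ:
        if estimated_power(activity_counts, f) <= POWER_BUDGET_W:
            return f
    return FREQ_STEPS_GHZ[-1]  # fall to the floor clock otherwise
```

Because `pick_frequency` depends only on the workload's counters and the shared table, two consoles running the same code land on the same clock, which is the consistency property Cerny described. A chip that really burns less power than the idealized model predicts gets no credit for it, and a chip that burns more than the model's minimum assumption simply can't ship.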