Well, I disagree. The PS5's dynamic clock system shows us that clock speed is not the only factor. What also matters is which instructions are used and how many are issued per cycle. Besides, the architectures are quite different: PS5 mainly follows RDNA 1/2 (L1/L2 caches, CUs per Shader Engine), while XSX has a custom layout not seen on any RDNA 1 or 2 desktop GPU. Maybe that compute-focused architecture prevented them from raising clocks the way RDNA 2 desktop GPUs and PS5 did. As for the compute focus, maybe it's because of their cloud compute servers? That's what Spencer told us years ago: that XSX was designed for both gaming and cloud compute servers. But I digress.
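The "instructions per cycle matter" point is just the classic CMOS dynamic-power approximation, P ≈ C · α · f · V², where the activity factor α depends on the instruction mix. Here's a toy sketch of that idea; every constant below (the capacitance term, voltages, activity factors) is invented for illustration and is not a real console spec:

```python
# Toy illustration: power depends on switching activity (instruction mix),
# not just clock speed. Uses the classic CMOS approximation
# P ~ C * alpha * f * V^2. All constants are made up, not console figures.

def dynamic_power(alpha, freq_ghz, volts, cap=100.0):
    """Approximate dynamic power in watts for a given activity factor alpha."""
    return cap * alpha * freq_ghz * volts ** 2

# Same clock, two different instruction mixes:
light_mix = dynamic_power(alpha=0.4, freq_ghz=2.23, volts=1.0)  # light workload
heavy_mix = dynamic_power(alpha=0.9, freq_ghz=2.23, volts=1.0)  # dense ALU work

print(f"light workload: {light_mix:.0f} W")
print(f"heavy workload: {heavy_mix:.0f} W")
```

Same frequency, more than double the power just from the mix of instructions, which is exactly why a dynamic clock scheme keys off workload rather than a fixed frequency ceiling.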
And when they test clock speed versus power consumption (looking for some kind of sweet spot) for yields, they have to test against the maximum power the APU can possibly draw, for instance with a power virus like Furmark, not an average. Remember what Cerny told us here: without dynamic clocks they couldn't even reach 2 GHz, I assume because in some rare cases, even very briefly, the system can hit that maximum draw at those clocks. If XSX's maximum possible power draw is similar to PS5's, that would explain its relatively low static clocks: with fixed clocks you must plan yields around that worst-case consumption. So maybe, with that architecture and those clocks, they end up with the same yields as PS5.
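Cerny's point can be sketched as two budgeting strategies under the same power limit: a fixed-clock design must pick a frequency that keeps even the worst-case ("Furmark-like") mix in budget, while a variable-clock design runs higher most of the time and throttles only during those rare spikes. Again, the budget, the capacitance term, and the activity factors here are hypothetical toy numbers:

```python
# Toy sketch: fixed vs. variable clocks under one power budget.
# Reuses P ~ C * alpha * f * V^2; all numbers are invented for illustration.

POWER_BUDGET_W = 200.0
CAP = 100.0
VOLTS = 1.0

def max_fixed_clock(worst_alpha):
    """Fixed clocks: frequency must keep even the worst-case mix in budget."""
    return POWER_BUDGET_W / (CAP * worst_alpha * VOLTS ** 2)

def variable_clock(alpha, f_max):
    """Variable clocks: run at f_max, drop frequency only when this
    workload's activity would push power past the budget."""
    f_in_budget = POWER_BUDGET_W / (CAP * alpha * VOLTS ** 2)
    return min(f_max, f_in_budget)

worst_alpha = 1.0     # rare power-virus case
typical_alpha = 0.55  # typical game frame

print(max_fixed_clock(worst_alpha))            # fixed design capped by worst case
print(variable_clock(typical_alpha, 2.23))     # variable design: full clock
print(variable_clock(worst_alpha, 2.23))       # variable design: brief throttle
```

With these toy numbers the fixed design is stuck at 2.0 GHz forever because of a case that almost never happens, while the variable design holds 2.23 GHz in typical frames and dips only when the worst-case mix actually shows up.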