I can't believe we still have people who deny the advantages of fixed-spec optimization. Knowing exactly what architecture you're building for and what resources are on offer lets you design around them to get better overall results.
I don't think anyone is denying it; we're just saying the reality is messier, for a few reasons.
Only a handful of dev houses have the will, capability and budget to squeeze every last ounce of performance from those fixed specs. Naughty Dog and Insomniac are the exemplars. But then you have id and Nixxes on the PC side as well.
The vast majority of developers have to target a sliding window of multiple console platforms plus PC. Today that means PS4 Pro, PS5, Xbox One X, Xbox Series S, and Xbox Series X (Switch is a special case). By the time the previous generation is abandoned, we'll be seeing PS5 and XBS "pro" refreshes. Yeah, that space is still 10x smaller than the PC space, but it's not a clean fixed spec either, and very few dev houses have the means and the impetus to optimize that hard.
It's just that most games are console-first (with good reason), and PC ports are often half-hearted efforts because the devs know higher-specced PC hardware can compensate for a sloppy port through sheer brute force. So yes, the phenomenon does exist, though to me it's less that consoles punch above their weight and more that Class-X hardware gets prioritized over Class-Y hardware.
On a side note, I'm not sure the DX12 and VK direction has worked out as intended. Exposing near-bare-metal control on PC only works if devs are willing to take advantage of it, and I'm not sure that's actually happening. DX11 let the hardware vendors work their secret sauce under the covers, inside the driver.
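To make that concrete, here's the flavor of bookkeeping the explicit APIs push onto the engine; a minimal Vulkan sketch (illustrative only; cmd and image are assumed to come from the app) of an image layout transition that a DX11 driver would have inferred and inserted for you:

    #include <vulkan/vulkan.h>

    // Transition a render target so a later pass can sample it. Under DX11
    // the driver tracked this hazard automatically; under VK/DX12 the app
    // records the barrier itself, and eats the blame for getting it wrong.
    void transitionToShaderRead(VkCommandBuffer cmd, VkImage image) {
        VkImageMemoryBarrier barrier{};
        barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
        barrier.srcAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
        barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;
        barrier.oldLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.image               = image;
        barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

        // Wrong stage/access masks here are exactly the class of bug the
        // old driver-side "secret sauce" used to paper over.
        vkCmdPipelineBarrier(cmd,
                             VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                             VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                             0, 0, nullptr, 0, nullptr, 1, &barrier);
    }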
Switch is a special case and fixed-spec optimization is certainly at play here, firstly out of sheer necessity and secondly because it actually is a single fixed spec (kinda). But I'm struggling to extrapolate the observations from this special case to the much higher-spec new Sony/MS consoles.
I suppose this is also true; my argument was definitely more GPU-oriented. I'll bet there were huge CPU optimizations in the prior gen due to (a) the severe suckage of the Jaguar CPUs and (b) their narrow, low-clocked, only weakly out-of-order cores, which reward careful hand-optimization.
However, I think the situation is going to be dramatically different for the current gen. Those Zen 2 CPUs are very capable: modern superscalar out-of-order designs with immensely capable hardware schedulers, branch predictors, and cache prefetchers that you really don't have to manhandle. And the macroscopic code optimizations (tiling etc., sketched below) are increasingly handled by modern compilers. Basically it's a "welcome to 1995" situation for console CPUs (I'm being facetious, but only partially).
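For anyone unfamiliar, the tiling referred to above is the classic cache-blocking transform. A minimal hand-rolled sketch (N and B are made-up values you'd tune per target; src and dst are assumed to hold N*N floats):

    #include <cstddef>
    #include <vector>

    constexpr std::size_t N = 4096;  // matrix dimension (assumption)
    constexpr std::size_t B = 64;    // tile edge; pick so a BxB tile fits in cache

    // Cache-blocked transpose: walk the matrix one BxB tile at a time so the
    // strided writes to dst stay within a cache-resident block, instead of
    // thrashing the cache on every row the way the naive two-loop version does.
    void transpose_tiled(const std::vector<float>& src, std::vector<float>& dst) {
        for (std::size_t ii = 0; ii < N; ii += B)
            for (std::size_t jj = 0; jj < N; jj += B)
                for (std::size_t i = ii; i < ii + B; ++i)
                    for (std::size_t j = jj; j < jj + B; ++j)
                        dst[j * N + i] = src[i * N + j];
    }

The point stands either way: on a wide out-of-order core with good prefetchers, this kind of transformation plus the compiler's own loop machinery buys most of the win, without the console-specific heroics the Jaguar era demanded.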