Well yes (and no) - I'm actually agreeing that Control plays better (or at least as well, once the stutter is fixed) on PS5 in all modes - it's a wash.
I believe Alex has drawn the conclusion, because the photo mode shows such a good performance gain (well, up to the expected level of gap), that 'if dropped to 1080p XSX could run RT capped at 60'.
I can see why he has concluded this from the photo mode, but in the same breath we have the 2 consoles neck and neck in gameplay. So my conclusion (and I'm no expert - Alex knows a lot more than me!) is that something in the gameplay element of the game is dragging the XSX back... as such, if both were dropped to 1080p/60 with RT, I'd personally expect similar performance.
From memory (in this thread) I believe Alex said the texture streaming is off in photo mode, and others have said the CPU is barely utilised at all in photo mode... I would ass-u-me that if those were being utilised, the graphs would level off to a similar figure (from my ass) of 30-50. We've seen examples where the photo mode varies wildly (from 32 to 60+), and you can imagine how jarring such a varying FPS would be - hence the 30 FPS cap.
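To picture why you'd cap it, here's a tiny sketch of how an unlocked, swinging framerate compares to a flat 30 FPS cap in frame-time terms (the fps values are completely made up for illustration, not measured from Control):

```python
# Rough sketch: why a framerate that swings 32-60+ feels worse than a flat 30 cap.
# The fps values below are invented purely for illustration, not measured data.

unlocked_fps = [58, 44, 61, 33, 52, 36, 60, 41]   # hypothetical per-second averages

def frame_time_ms(fps):
    """Milliseconds the player actually waits per frame at a given framerate."""
    return 1000.0 / fps

unlocked = [round(frame_time_ms(f), 1) for f in unlocked_fps]
capped = [round(frame_time_ms(min(f, 30)), 1) for f in unlocked_fps]  # 30 FPS cap

print("unlocked:", unlocked)  # swings between ~16 ms and ~30 ms -> visible judder
print("capped:  ", capped)    # flat 33.3 ms every frame -> consistent pacing
```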
I guess the main question is: has the PS5 got something that boosts performance in-game (e.g. I/O, cache scrubbers), or is the XSX being held back by immature dev kits (or whatever the technical term is)? I'd guess the latter - this photo mode (for me) cements the XSX's potential, and the paper specs show it shouldn't be any more bottlenecked than the PS5 (unless I missed something).
Quite interesting, isn't it?
Personally, given this game is a relatively quick port by a small porting team, I think pinning some of the performance discrepancy on the devs here is valid, as crummy as that talking point can sometimes get (because it might insinuate "lazy devs", which isn't the case here at all).
However, if it IS something more to do with the lack of tools maturation and/or devs' familiarity with them, then the question has to be asked: how long will it take before the tools, or familiarity with them, are actually at a good point? Because the longer it drags out, the more that hurts the optics.
If, though, it's down to something in the hardware design, well, from this benchmark it clearly wouldn't be the GPU causing the issue. In fact, this would also disprove the talking point that the GPU is harder to saturate on the front end (that's been a particularly popular one on the grapevine, ignoring the fact that there are literal 80 CU GPUs on the market and rumored 160-CU chiplet designs coming with RDNA 3, so CLEARLY CU saturation is not a problem with AMD's architecture xD).
So, I had a brief thought that, if anything, it could be the CPU. I don't know how CPU-intensive this game is, though; all the environmental deformation seems like it would be CPU-heavy, unless the game is using GPGPU asynchronous compute to handle the physics calcs (given how crappy the XBO/PS4 CPUs were, that might well have been the case).
Anyway, I figure that since the difference in CPU speeds between the two systems in SMT mode is only 100 MHz, and MS's CPU is handling a bit of the I/O stack management (a very small amount, but still more than the PS5's CPU in any case), there may be a case that MS's CPU isn't leaving enough headroom to issue drawcalls fast enough to saturate the GPU during certain events, and so keep the framerates locked as consistently as Sony's.
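As a toy way of framing that bottleneck idea (the millisecond figures here are completely invented, just to show the shape of the argument, not anything measured from Control):

```python
# Toy 'CPU can't feed the GPU' model: the delivered framerate is limited by
# whichever side finishes its per-frame work last. All numbers are invented.

def delivered_fps(cpu_frame_ms, gpu_frame_ms):
    """Framerate is bound by the slower of CPU submission and GPU rendering."""
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

# Photo mode: barely any CPU work (no streaming/game logic), so the GPU is the limit
print(delivered_fps(cpu_frame_ms=8.0, gpu_frame_ms=14.0))   # ~71 fps, GPU-bound

# Gameplay: heavier CPU frame (game logic, streaming, drawcall submission)
print(delivered_fps(cpu_frame_ms=17.0, gpu_frame_ms=14.0))  # ~59 fps, CPU-bound
# Once you're CPU-bound, the bigger GPU stops mattering and both consoles converge
# on whatever their CPUs can sustain - which would fit the neck-and-neck gameplay.
```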
Considering the difference in GPU clocks and the game running at the same settings on both, if we assume it's saturating the PS5's GPU at peak clocks, you'd need about 44 of the Series X's CUs saturated to match the PS5's workload. That's roughly 15% of the Series X's CUs to spare, i.e. around 18% extra throughput over the PS5-equivalent workload, and with the Photo Mode benchmarks we're seeing a 16% - 18% difference in favor of Series X on the average (with peaks well above that), so I'm just curious what CPU you'd need to hit that in actual gameplay?
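Here's the back-of-the-envelope version of that, using the public specs and naive clocks-times-CUs scaling (real workloads obviously won't scale this cleanly):

```python
# Naive CU/clock comparison from the public specs; simple throughput scaling only.

ps5_cus, ps5_clock = 36, 2.23      # PS5 GPU CUs and peak boost clock (GHz)
xsx_cus, xsx_clock = 52, 1.825     # Series X GPU CUs and fixed clock (GHz)

# Series X CUs that would need to be busy to match the PS5's peak throughput:
equivalent_cus = ps5_cus * ps5_clock / xsx_clock
print(f"Series X CUs needed to match PS5: {equivalent_cus:.1f}")   # ~44.0

# Headroom left once those ~44 CUs' worth of work is matched:
headroom = (xsx_cus - equivalent_cus) / equivalent_cus
print(f"Extra throughput if fully saturated: {headroom:.1%}")      # ~18%
# ...which is right where the 16-18% photo mode gap sits, so in gameplay
# something else is presumably stopping the GPU from being fully fed.
```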
So, just me spitballing here, but maybe it's simply a case of the CPU needing to be clocked faster still in SMT mode to keep the GPU saturated with work, issuing drawcalls fast enough to actually exploit the spec headroom over the PS5's GPU, especially since the CPU on MS's side still has to handle some of the I/O stack plus, at least on some level, management of the segmented memory by the OS running on the reserved core. That'd be my first guess if we get to the point where we have to start asking whether something ingrained in the hardware design is causing the results we're seeing regularly between the two platforms.
Also, almost forgot to say, but none of this prevents pondering what Sony may have done to optimize for their performance results at the same time. It doesn't have to be an either/or. For example, there are the cache scrubbers you mentioned, but there's also the persistent speculation about a unified L3$ on the PS5's CPU. If that happens to be true (and that's a very BIG if), I feel that would give the PS5 the advantage for operations in SMT mode even with the 100 MHz clock speed disadvantage: an easier time issuing drawcalls to the GPU, which ultimately means higher framerates when settings are the same or similar between it and the Series X versions of games.
Again though, this is all just speculation to keep in the back pocket until we see more results and can truly ascertain what may or may not be happening; for now I predominantly peg the performance differences in Control between the systems on the time/resource crunch of the porting team, with maybe some tools immaturity/unfamiliarity behind that.