Hard disagree here. I think Guerrilla, with Decima, basically solved the reconstruction problem on consoles for sharp, clean 60fps games with Horizon Forbidden West on PS5 (the native res is only half of the CBR 1800p output, so about native 1270p, which is pretty decent for alphas; quick math check below). Every other developer should look at what they have done here. With FSR2 they are reconstructing from too low a resolution, and the cost is super high because they target too high a final output. Here for Skull & Bones:
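On the resolution math: checkerboarding shades half the pixels of the full 3200x1800 grid each frame, and the 16:9 resolution with that pixel count lands right around 1270p. Quick back-of-the-envelope check (plain arithmetic, nothing game-specific):

```python
# Effective "native" resolution of 1800p checkerboard rendering.
# CBR shades half the pixels of the full 3200x1800 grid each frame.
full_w, full_h = 3200, 1800
shaded_pixels = full_w * full_h / 2                # 2,880,000 pixels per frame

# Find the 16:9 resolution with the same pixel count:
# w = (16/9) * h, so pixels = (16/9) * h^2  =>  h = sqrt(pixels * 9/16).
effective_h = (shaded_pixels * 9 / 16) ** 0.5
print(f"effective height: {effective_h:.0f}p")     # ~1273p, i.e. "about 1270p"
```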
Ah crap, you're exactly right, completely forgot that HFW was 1800p. Yeah, it took a bit to get it settled, but checkerboarding in there is miles ahead of FSR2 scaling from comparable resolutions, at least from what I've seen so far. I'd say Remnant 2, at least, had a decent FSR2 implementation considering how low the internal res was.
Now, CBR in FW isn't perfect; it can mangle some elements, such as the neon signs, which can look almost unrecognizable compared to native, but those cases are relatively rare.
Why don't developers use CBR more? Well, because it's hard to get right, obviously! Plenty of things must be looked at, some effects must be rendered at a higher resolution, and so on. Even Guerrilla had trouble getting it right at first, and they are one of the most talented studios out there. But once the work has been done, the 30fps mode is rendered obsolete for many. (Toy sketch of the core idea below.)
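To give a feel for what reconstruction is actually doing, here's a deliberately naive toy of the core idea (my own sketch, not Decima's actual pipeline): shade half the pixels in a checker pattern each frame and fill the other half from history. Everything that makes CBR hard to get right (motion vectors, history rejection, effects that break the pattern) is exactly what this toy ignores:

```python
import numpy as np

def checker_mask(h, w, frame_idx):
    """Mask selecting half the pixels; the checker half alternates per frame."""
    yy, xx = np.mgrid[0:h, 0:w]
    return ((yy + xx) % 2) == (frame_idx % 2)

def render_half(scene, mask):
    """Stand-in for the expensive shading pass: shade only the masked pixels."""
    out = np.zeros_like(scene)
    out[mask] = scene[mask]
    return out

def reconstruct(shaded, history, mask):
    """Take this frame's shaded half, fill the rest from last frame's result.
    Real CBR reprojects the history with motion vectors and rejects stale
    samples; this toy assumes a perfectly static image."""
    out = history.copy()
    out[mask] = shaded[mask]
    return out

# Demo: on a static scene, two frames are enough to recover every pixel.
scene = np.random.rand(4, 8)
recon = np.zeros_like(scene)
for frame in range(2):
    mask = checker_mask(*scene.shape, frame)
    recon = reconstruct(render_half(scene, mask), recon, mask)
assert np.allclose(recon, scene)
```

The moment anything moves, the history half is wrong, and that's where all the per-effect work (and artifacts like those neon signs) comes from.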
That's what I've heard; it's a bitch to wrangle. But it probably could have been squeezed into an AAAA game with an 11-year development cycle, I think.
They can only be sure for the PC version, not the consoles with their underclocked laptop-class CPU. From 60-80fps, I'd say that as a general rule we can assume most games will be CPU limited on consoles, notably because many games use DRS, which reduces the risk of being GPU limited.
This is why you need to use a modest CPU on PC as well, to see which games start to become CPU limited with such a weaker CPU! This is not some unreasonable request! It has been done for decades in PC vs PC benchmarks, for good reasons.
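The underlying model is simple: a frame can't finish faster than its slowest stage, so fps is roughly 1000 / max(CPU ms, GPU ms), and DRS only shrinks the GPU term. A toy illustration (made-up numbers, purely to show the shape of it):

```python
def fps(cpu_ms, gpu_ms):
    """A frame can't complete faster than its slowest stage."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Made-up numbers for illustration only.
cpu_ms = 14.0                      # fixed CPU cost per frame
for gpu_ms in (20.0, 15.0, 9.0):   # DRS dropping resolution cuts GPU cost
    limiter = "GPU" if gpu_ms > cpu_ms else "CPU"
    print(f"gpu={gpu_ms:4.1f}ms -> {fps(cpu_ms, gpu_ms):5.1f}fps ({limiter} limited)")

# Once DRS pushes GPU time under 14ms, fps flatlines at ~71fps: that wall is
# the CPU, which is why the CPU on the test bench matters in a 60-80fps range.
```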
It's a rather pointless request, though, when the goal of the video was to compare GPUs, and you do that by purposefully stressing GPU-limited scenarios, which is exactly what Rich did. If the bulk of his video had been testing 120fps uncapped modes on the PS5 vs the PC, and the video had been more about comparing the performance of equivalent-budget gaming systems, the complaint about the CPU would have merit, but that's not what he did, by design.
As has been detailed in this thread, there are only two benchmarks in that entire suite with even the faintest possibility of being CPU limited, and we have data on how those games perform with weak CPUs: they're not remotely bottlenecked by CPU performance. Absolutely ancient CPUs, weaker than the PS5's (which is not a weak CPU!), can blow past those framerates when put into CPU-limited scenarios in those games. And that's on a system with the 'overhead' of a PC API vs a console.
As I said with MHR, perhaps the strongest (cough) case for being CPU limited, you don't even need to look at PC benchmarks for that! The supersampled ~6k mode had the exact same performance discrepancy between the PS5 and the 6700 as the 4k, 120fps mode. It's GPU limited in both. There is simply no rational reason to believe the CPU used had any bearing on the results shown in that video with the tests performed. It's fine to want a different video testing different things, but there was nothing wrong with the methodology used in this one; it properly tested exactly what it was testing. (Toy version of that inference below.)
Here's MHR in a CPU-limited scenario on a 5600X, btw: ~300fps. It's time to give up this argument, folks.
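For anyone who wants the ~6k vs 4k logic spelled out: if the CPU were the wall in either mode, the fps gap between the two GPUs would shrink in the lighter mode, because the faster GPU would pile up against the same CPU ceiling. A constant ratio across wildly different GPU loads is the signature of a pure GPU limit. Invented numbers below, purely to show the inference (not DF's data):

```python
def fps(cpu_ms, gpu_ms):
    """A frame can't complete faster than its slowest stage."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Invented numbers: GPU B is 30% faster than GPU A, identical CPU cost.
cpu_ms = 8.0
for mode, gpu_a_ms in (("~6k supersampled", 40.0), ("4k/120fps mode", 16.0)):
    a = fps(cpu_ms, gpu_a_ms)         # slower GPU
    b = fps(cpu_ms, gpu_a_ms / 1.3)   # 30% faster GPU
    print(f"{mode}: A={a:5.1f}fps  B={b:5.1f}fps  ratio={b / a:.2f}")

# Both modes print ratio=1.30. With a CPU limit in the lighter mode, the
# faster GPU would hit the CPU wall first and the ratio would compress.
```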