Playsaves3 said:
I would agree that the CPU is bottlenecking the 2070 by quite a bit. The problem is you have a double standard on where that applies: wouldn't the PS5 GPU, which has the same level of CPU as used in the video, also be bottlenecked in that same situation, and in essence still leave the GPU in an equivalent position? What it sounds like you want is for the PC GPU to be alleviated from CPU bottlenecks while still allowing the PS5 GPU to be CPU bottlenecked, and I'm sorry, but that's a travesty. We wouldn't do a GPU benchmark between a 3090 and a 3060 where one has a 12900K and the other a 12600K, would we?

There are two issues here:
1. You state that you're simply comparing how your specific machine (a common combination of CPU and GPU) performs vs the PS5 at PS5-like settings. That would be completely fair IMO if you actually presented it that way in the videos. But you don't. You show whole reels of footage (RT performance mode) of the 2070 underperforming in CPU-limited scenarios while providing commentary on how many percent faster the PS5 is than that GPU. That's simply wrong because you are not measuring GPU performance there, yet you are giving the impression of the GPU itself being heavily outperformed by the PS5. I'll grant that you do also discuss CPU limitations during those scenes, but you repeatedly flip-flop the messaging back to direct GPU performance comparisons. At one point you even criticize "other" content producers for using high-end CPUs (specifically the 12900K, so no prizes for guessing who that criticism was targeted at) in their GPU comparison pieces, which is in fact exactly what they should be doing, and what you should be doing if you want to make direct GPU-to-GPU performance comparisons: you isolate the tested component's performance by ensuring no other component acts as a bottleneck (a rough way to sanity-check whether a given run is CPU or GPU limited is sketched after point 2).
I know you also added a whole section which is likely not CPU limited in the Fidelity matched-settings comparison, but that entire comparison is seemingly invalidated by the VRAM limitation, which I'll give you the benefit of the doubt on and assume you simply didn't realize at the time of putting out the video. Sure, you can argue that you're still showing the relative performance of the systems because that's simply how much VRAM the component you're testing has. But if you're going to do that, then you need to make it very clear to the viewer that the 2070 isn't being fully utilised at these particular settings because it's VRAM limited. But you don't present it that way. In fact you never once mention VRAM limiting the GPU's frame rate there, and instead frame it as the PS5 GPU simply being more powerful and/or more efficient due to running in a console environment.
2. If you're truly only wanting to show the relative experience possible on two specific systems, as opposed to a deeper architectural analysis of those systems' performance potential, then the basis of the comparison is unfair to begin with. If you want to show how the experience on your machine compares in Spiderman vs the PS5, then limiting the testing to PS5-matched settings which are suboptimal for that machine and ideal for the PS5 is skewing the result. The VRAM limitation which has been demonstrated spectacularly in this thread is a perfect example of that. It's a fact that the 2070 is lagging the PS5 in VRAM capacity, but it's also (generally speaking) matching or exceeding it in RT capability, and has tensor cores to provide superior upscaling quality at a given internal resolution. So while showing the matched PS5 settings is valid, it should be balanced by showing what experience is possible at more PC-favoring settings. In this case that may have been something like High textures with very high ray tracing resolution and geometry, 16xAF and DLSS Performance, which on balance should provide a mix of graphics and performance better suited to the 2070's strengths and compare far more favorably with the PS5 experience.
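As a rough illustration of what I mean in point 1 about isolating the tested component, here's a minimal sketch (not anything from the video, just my assumption of how such a check could be done) that classifies logged frames as GPU-limited or not by comparing total frame time against GPU busy time. The sample data, column meanings and 90% threshold are all hypothetical; per-frame data of this kind can be captured with frame-time logging tools or vendor overlays.

```python
# Hypothetical per-frame samples: (total frame time in ms, GPU busy time in ms).
# In a real capture these would come from a frame-time logging tool.
frames = [
    (16.7, 16.1),  # GPU busy for almost the whole frame -> GPU limited
    (14.2, 9.0),   # GPU idle for ~5 ms -> something else (CPU) is the limit
    (18.9, 18.5),
    (13.0, 7.5),
]

# Illustrative threshold: treat a frame as GPU-bound if the GPU was busy
# for at least 90% of the frame. The exact cutoff is a judgement call.
GPU_BOUND_RATIO = 0.90

def classify(frame_ms, gpu_busy_ms):
    """Label a single frame as 'GPU-bound' or 'CPU/other-bound'."""
    return "GPU-bound" if gpu_busy_ms / frame_ms >= GPU_BOUND_RATIO else "CPU/other-bound"

labels = [classify(f, g) for f, g in frames]
gpu_bound_share = labels.count("GPU-bound") / len(labels)

for (f, g), label in zip(frames, labels):
    print(f"frame {f:5.1f} ms, GPU busy {g:5.1f} ms -> {label}")

# Only the GPU-bound portion of a run says anything about relative GPU performance.
print(f"GPU-bound frames: {gpu_bound_share:.0%}")
```

If a scene spends most of its frames below that kind of threshold, any FPS delta measured there is telling you about the CPU (or the port's CPU-side behaviour), not about the GPUs being compared.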
So call it out as a bug specific to the PC version. Give the PS5 version all the kudos you want for not having that bug, but don't frame that bug as some kind of general architectural deficiency of the PC platform which "requires a $500-$600 CPU to address".
That's fair enough, but see above. It's not the fact that you call attention to the difference; it's how you frame it as a platform architecture advantage rather than what it is: a software bug.
But framing this as the solution to the problem (which it isn't, because VRAM often has no impact on the issue), thus reinforcing the false argument that this is simply an architectural deficiency that needs to be brute-forced past on the PC... is wrong.
VRAM has been proven incontrovertibly within this thread to be a limitation on the 2070 at the settings you are using.
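For anyone who wants to check that kind of limitation on their own card, here's a minimal sketch of the sort of monitoring I mean, assuming an NVIDIA GPU and the pynvml package being installed; it just polls total device VRAM usage while the game runs, and is not anything used in the video or this thread.

```python
# Minimal VRAM headroom logger (NVIDIA only, assumes pynvml is installed).
# Run it alongside the game to see how close the card sits to its memory limit.
import time
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMemoryInfo,
    nvmlDeviceGetName,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    name = nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    print(f"Monitoring {name}")
    for _ in range(30):  # ~30 seconds of samples
        mem = nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 1024**3:5.2f} / {mem.total / 1024**3:.2f} GiB")
        time.sleep(1.0)
finally:
    nvmlShutdown()
```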
I don't think anyone has claimed that the CPU is a bottleneck in GPU-bound scenarios, only pointed out that you make GPU performance comparisons in CPU-bound scenarios.
Personally I think the game is a great port, but it was also a mammoth undertaking and is open to much closer scrutiny than the vast majority of ports, so inevitably a few fairly serious bugs have been identified. As you state, Nixxes seems to be quick to fix them; hopefully that will be the case with the VRAM under-allocation and mip loading issues. Until then, the onus is on testers like you to ensure the public is properly informed where these bugs impact testing, and not to use them to make potentially misleading claims about platform architectural advantages or raw performance.
That's a really good find. I didn't even know that column existed in Task Manager, but I'll be using it in future! iroboto beat me to it, but I just wanted to second his statements about your excellent contributions in this thread. It's certainly changed the way I think about VRAM. Very informative!
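If you'd rather grab a similar per-process number programmatically instead of watching Task Manager's dedicated GPU memory column, the sketch below is one possible approach using NVML's per-process query (NVIDIA only, pynvml assumed installed). Note this is only a best-effort cross-check: on some OS/driver combinations the driver doesn't expose per-process usage at all.

```python
# Best-effort per-process VRAM listing via NVML (NVIDIA only, pynvml assumed).
# Loosely mirrors Task Manager's per-process dedicated GPU memory column,
# though the driver may not report per-process usage on every platform.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetGraphicsRunningProcesses,
)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    for proc in nvmlDeviceGetGraphicsRunningProcesses(handle):
        if proc.usedGpuMemory is None:
            # Driver didn't expose per-process usage for this process.
            print(f"pid {proc.pid}: dedicated usage not reported")
        else:
            print(f"pid {proc.pid}: {proc.usedGpuMemory / 1024**2:.0f} MiB dedicated")
finally:
    nvmlShutdown()
```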