Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

In this comparison, the resolution setting of the PS5 version of The Last of Us doesn't look like real 4K:

Not in the actual gameplay part, at least.
 
That actually looks fairly reasonable to me. A 3600X seems to be coming in at around 60 fps, which is what you would expect vs the PS5.

GPU performance on the other hand...
Nah, a 3600 is often in the 40s - not at all like the PS5.
The game is probably still compiling shaders in the background as you play, despite your having waited for the compilation to finish at the start.
Nah, I waited two hours and the percentage had long since reached 100. I would never start a game before the shaders are done.
 
In this comparison, the resolution setting of the PS5 version of The Last of Us doesn't look like real 4K:

Not in the actual gameplay part, at least.

So a quick summary from watching the video:

  • PS5 seems to be equal to medium for draw distance
  • PS5 seems to be equal to medium for shadows
  • Not sure on the GI but maybe medium again
  • Difference between high and ultra textures is basically non-existent
  • PS5 looks to be using medium for water
  • PS5 looks to be using high reflections

So I would guess that the PS5 is pretty much running the medium preset with the odd high setting.
 
That actually looks fairly reasonable to me. A 3600X seems to be coming in at around 60 fps, which is what you would expect vs the PS5.

GPU performance on the other hand...
As a reminder, PS5 CPU performance is around a Ryzen 1700X (or 2700). It's an underclocked Zen 2 mobile variant with much less cache than the desktop Zen 2 CPUs. That's worth considering when using mid- or high-end CPUs to compare the GPUs.

Those benchmarks should always be used to compare the combination of CPU + GPU (even RAM). So, say, a 3600X + 3600 combination performs XX% above the PS5's 1700X + OC'd 5700 equivalent. On PC we can directly compare GPUs because we can easily use the exact same motherboard + RAM + CPU combination, so the comparisons between GPUs become fair, which is not possible when comparing against consoles.
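A minimal sketch of that framing, assuming hypothetical fps figures (the numbers and the `relative_gap` helper below are illustrative, not benchmark results):

```python
# Minimal sketch of the "compare the whole combination" point: the relative
# gap is just the ratio of measured frame rates for each CPU + GPU pairing.
# The fps numbers below are hypothetical placeholders, not benchmark results.
def relative_gap(fps_a, fps_b):
    """How much faster combination A is than combination B, in percent."""
    return (fps_a / fps_b - 1.0) * 100.0

pc_combo_fps = 72.0   # e.g. a 3600X-based build, made-up figure
ps5_fps      = 60.0   # the PS5 (roughly 1700X-class CPU + 5700-class GPU), made-up figure
print(f"PC combination is {relative_gap(pc_combo_fps, ps5_fps):.0f}% faster in this hypothetical")
```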
 
As a reminder, PS5 CPU performance is around a Ryzen 1700X (or 2700). It's an underclocked Zen 2 mobile variant with much less cache than the desktop Zen 2 CPUs. That's worth considering when using mid- or high-end CPUs to compare the GPUs.

Those benchmarks should always be used to compare the combination of CPU + GPU (even RAM). So, say, a 3600X + 3600 combination performs XX% above the PS5's 1700X + OC'd 5700 equivalent. On PC we can directly compare GPUs because we can easily use the exact same motherboard + RAM + CPU combination, so the comparisons between GPUs become fair, which is not possible when comparing against consoles.

And the PC CPU is doing more work than the CPU on the PS5. They use Oodle Kraken and decompress on the CPU. Other things like 3D audio are handled by the Tempest engine on PS5; are they done on the CPU or the GPU on PC? This is probably one of the reasons the 3600X doesn't perform as well as the PS5 here, besides better optimization on PS5 and a thinner API.

Maybe part of the VRAM usage comes from the fact that they need to tailor a streaming solution that also works on HDDs.
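As a rough illustration of how that CPU-side decompression competes with game work, here's a minimal sketch using zlib as a stand-in for Oodle Kraken; the codec, chunk size and thread count are all assumptions for illustration, not details of the actual port:

```python
# Minimal sketch of CPU-side streaming decompression - the kind of work the
# PS5's dedicated I/O block handles in hardware. zlib stands in for Oodle
# Kraken purely for illustration; the codec, chunk size and thread count are
# assumptions, not details of the actual port.
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

def make_chunk(size_mb=4):
    """Build and compress one placeholder streaming chunk of asset data."""
    raw = bytes(range(256)) * (size_mb * 1024 * 1024 // 256)
    return zlib.compress(raw, level=6)

def decompress_chunk(chunk):
    """Decompress one chunk on a worker thread; returns the raw size."""
    return len(zlib.decompress(chunk))

if __name__ == "__main__":
    chunks = [make_chunk() for _ in range(8)]  # pretend these just came off disk
    start = time.perf_counter()
    # On PC these workers compete with the game's own threads for CPU time,
    # which is the extra load being described above.
    with ThreadPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(decompress_chunk, chunks))
    elapsed = time.perf_counter() - start
    print(f"Decompressed {total / 2**20:.0f} MiB on the CPU in {elapsed:.2f}s")
```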
 
(although we'll see if that Oodle DLL replacement is actually beneficial, at least in terms of precompile time)

Speaking of which, I asked over on ResetEra for people who have tried it and noticed any difference in performance or shader compiling time, and every response I got from people who actually measured it was no - no measurable effect.

Always be wary of anecdotal miracle 'fixes' from some guy on a forum who doesn't provide actual data, repeated by others simply linking back to the original claim. No, a slightly different DLL has not decreased shader compile time by an order of magnitude, as it turns out. Go figure.
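For what it's worth, the measurement that would settle it is simple: several timed runs of the compilation phase per configuration, then compare the means against the run-to-run noise. A minimal sketch with hypothetical placeholder timings:

```python
# Minimal sketch of a before/after comparison that would actually support (or
# refute) a "miracle DLL" claim. The timings are made-up placeholders; you'd
# fill them in with stopwatch times of the shader compilation phase across
# several runs of each configuration.
from statistics import mean, stdev

stock_dll_seconds   = [1420, 1385, 1442]  # hypothetical runs, original Oodle DLL
swapped_dll_seconds = [1398, 1431, 1405]  # hypothetical runs, replaced DLL

def summarize(label, runs):
    """Print the mean and run-to-run spread for one configuration."""
    print(f"{label}: mean {mean(runs):.0f}s, stdev {stdev(runs):.0f}s over {len(runs)} runs")

summarize("Stock Oodle DLL  ", stock_dll_seconds)
summarize("Swapped Oodle DLL", swapped_dll_seconds)
# If the gap between the means is within the run-to-run noise, the swap has no
# measurable effect - which matches the responses described above.
```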
 
When it comes to MP titles, performance isn’t compromised on PC, it’s compromised on console.

Considering the state of PC ports recently, and the fact that for MP titles the consoles will sell - sometimes significantly - more, I'm not so sure of that. I'm sure they're not as optimized 'as they could be' in the sense of a first-party studio like ND with an exceedingly long development time, budget and crunch - but I'm not so sure a game lacking quite that level of hyper-focus should be considered 'compromised'.

It's like saying any PC game that doesn't have Tiago Sousa at the helm is unoptimized.
 
IIRC Death Stranding also performs very close to a 3060 Ti. It seems like PS-centric games simply like the hardware. Maybe they exploit a specific strength of the PS5 that shifts the balance in its favour. Just a guess IMO.

I have no qualms with PS5 matching my 3070 or 3060ti.

IME with my 3060 and PS5, without DLSS and without RT, in pure rasterized performance you will fairly often need a 3060 Ti to keep up with or surpass a PS5; my 3060 is more of a toss-up. Even in a highly optimized PC title like Doom Eternal, I cannot maintain the same performance as the PS5 without using DLSS (not a loss, mind you - it looks better). So it's not that uncommon.

Death Stranding with DLSS + force aniso looks significantly better than the PS5 version btw, but goddamn that intermittent camera stutter bug.
 
IME with my 3060 and PS5, without DLSS and without RT, in pure rasterized performance you will fairly often need a 3060 Ti to keep up with or surpass a PS5; my 3060 is more of a toss-up. Even in a highly optimized PC title like Doom Eternal, I cannot maintain the same performance as the PS5 without using DLSS (not a loss, mind you - it looks better). So it's not that uncommon.

Death Stranding with DLSS + force aniso looks significantly better than the PS5 version btw, but goddamn that intermittent camera stutter bug.
If performance creep has happened that much already, it's worrying for the future. A 3060 Ti is 33% faster than a 5700 XT, and the generation has barely even started.
 
I'm sure they're not as optimized 'as they could be' in the sense of a first-party studio like ND with an exceedingly long development time, budget and crunch - but I'm not so sure a game lacking quite that level of hyper-focus should be considered 'compromised'.
Yea there's definitely some spectrum involved here, and you're right to look at my choice of words under a lens.

I think where I want to go with this is that, if all games are going multiplatform at launch from now on, consoles that used to offer low cost, ease of use, and the ability to 'punch above their weight' are eventually going to fall back to just low cost and ease of use.

Without those hyper-focused optimizations, punching above their weight is just not going to happen.
 
If performance creep has happened that much already, it's worrying for the future. A 3060 Ti is 33% faster than a 5700 XT, and the generation has barely even started.

Even before launch, DF was noting that the SX was basically performing like an RTX 2080. What I'm saying is that it's really not that much creep; it can vary significantly per title, of course, but they've always been in that ballpark.
 
Even before launch, DF was noting that the SX was basically performing like an RTX 2080. What I'm saying is that it's really not that much creep; it can vary significantly per title, of course, but they've always been in that ballpark.
Series X performing like a 2080 is spot on, though. Around the launch time frame, the PS5 only performed like a 2080 in Valhalla, which runs quite slowly on Nvidia GPUs. In most other games it ran like a 2070/5700 XT.
 
Series X performing like a 2080 is spot on, though. Around the launch time frame, the PS5 only performed like a 2080 in Valhalla, which runs quite slowly on Nvidia GPUs. In most other games it ran like a 2070/5700 XT.

If the XSX is ~2080, then the PS5 being ~2070/2070 Super is about right in a typical game, but in some games like AC: Valhalla and Sony exclusives I expect the PS5 to see a decent jump above that ~2070/Super baseline.
 
In this game it’s almost a 100% jump.

But this game is a hot mess and an extreme outlier in performance terms.

From a raw throughput perspective it really shouldn't be surprising at all that the PS5 is often playing in the 2080's league even before considering console optimisations (so a little slower than the 3060 Ti but a little faster than the 3060). The biggest surprise for me this generation thus far has been how infrequently that seems to have been the case. And more still, how badly the Series X has underperformed, which on paper should be playing pretty close to 2080 Ti levels.
 
And the PC CPU is doing more work than the CPU on the PS5. They use Oodle Kraken and decompress on the CPU. Other things like 3D audio are handled by the Tempest engine on PS5; are they done on the CPU or the GPU on PC? This is probably one of the reasons the 3600X doesn't perform as well as the PS5 here, besides better optimization on PS5 and a thinner API.

Maybe part of the VRAM usage comes from the fact that they need to tailor a streaming solution that also works on HDDs.
So you are saying custom hardware on the PS5, like the custom I/O and audio units, is actually reducing pressure on its CPU and greatly improving the performance of the game (compared to hardware lacking both of those custom silicon blocks)? That's a bold claim.
 
But this game is a hot mess and an extreme outlier in performance terms.

From a raw throughput perspective it really shouldn't be surprising at all that the PS5 is often playing in the 2080's league even before considering console optimisations (so a little slower than the 3060 Ti but a little faster than the 3060). The biggest surprise for me this generation thus far has been how infrequently that seems to have been the case. And more still, how badly the Series X has underperformed, which on paper should be playing pretty close to 2080 Ti levels.
PS5 loses in every theoretical metric to a 2080 outside of fill rate IIRC. Turing also has better scheduling of FP/INT math.
 
PS5 loses in every theoretical metric to a 2080 outside of fill rate IIRC. Turing also has better scheduling of FP/INT math.

Nah, they trade blows, but on balance the PS5 GPU could be argued to be ahead on paper. Compared with the PS5, and using the 2080's official boost clock as the basis, the 2080 has:

  • 77% of the PS5's pixel fill rate
  • 98% of the PS5's compute and texture throughput
  • 15% more geometry throughput
  • Equal memory bandwidth

Given the PS5's memory bandwidth is shared with the CPU, and the 2080 can often run above its rated boost clock, I guess we can see why the PS5 rarely seems to match the 2080's real-world performance. But that doesn't really leave much room for console optimisations to give a relative performance boost.
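For reference, those ratios fall straight out of the publicly listed specs (PS5: 2304 shaders, 144 TMUs, 64 ROPs at 2.23 GHz, 448 GB/s; RTX 2080: 2944 shaders, 184 TMUs, 64 ROPs at the 1,710 MHz official boost, 448 GB/s). A quick back-of-the-envelope sketch; note the geometry line assumes 4 rasterized triangles per clock on the PS5 vs 6 on TU104, which is an assumption rather than an official figure:

```python
# Back-of-the-envelope check of the ratios above, from publicly listed specs.
# The geometry line assumes 4 rasterized triangles per clock on PS5 vs 6 on
# TU104 (one per GPC); that front-end figure is an assumption, not a spec sheet value.
PS5_SPEC     = dict(shaders=2304, tmus=144, rops=64, clock_ghz=2.23, tris_per_clk=4, bw_gbs=448)
RTX2080_SPEC = dict(shaders=2944, tmus=184, rops=64, clock_ghz=1.71, tris_per_clk=6, bw_gbs=448)

def rates(gpu):
    """Convert a spec dict into theoretical peak throughput figures."""
    c = gpu["clock_ghz"]
    return {
        "pixel fill (GP/s)":  gpu["rops"] * c,
        "compute (TFLOPS)":   2 * gpu["shaders"] * c / 1000,
        "texture (GT/s)":     gpu["tmus"] * c,
        "geometry (Gtri/s)":  gpu["tris_per_clk"] * c,
        "bandwidth (GB/s)":   float(gpu["bw_gbs"]),
    }

ps5, rtx2080 = rates(PS5_SPEC), rates(RTX2080_SPEC)
for metric in ps5:
    ratio = 100 * rtx2080[metric] / ps5[metric]
    print(f"{metric:18s}  PS5 {ps5[metric]:7.1f}  2080 {rtx2080[metric]:7.1f}  2080/PS5 {ratio:5.1f}%")
```

Running it gives roughly 77% pixel fill, 98% compute and texture, 115% geometry and 100% bandwidth for the 2080 relative to the PS5, matching the list above.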
 