Ratchet & Clank technical analysis *spawn

We don't need to "wait and see" to prove that GPU decompression only provides a benefit if you don't have enough frame time to run CPU decompression. That fact doesn't make DirectStorage good or bad. Tech does specific things; if we aren't basing our understanding on what it actually does, our analysis will always be terrible. A purely wait-and-see approach had humans thinking the sun revolved around the earth for hundreds of years.

I really didn't take it that way. The impression I got was that SG's post was simply trying to lower the temperature in this thread, not declare something an outright failure based on one buggy game's first implementation of it, especially when it's a technology (GPU decompression) that requires coordination between developers, GPU hardware/drivers, and OS architecture. There are a lot of moving parts here. I really don't think he's calling for us to return to unleavened bread.

That being said, it seems clear enough at this point that the least Nixxes could do to help performance on this front is add a CPU-only DirectStorage option in a future patch. If the rumours are true about the Raptor Lake refresh and Zen 5 bringing along its own 'little' Zen 5c cores, one thing is clear - we won't be CPU core starved anytime soon. Put those little fuckers to work.
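For what it's worth, the DirectStorage runtime already has a switch for exactly this; the game just doesn't expose it. Here's a minimal sketch of what a CPU-only toggle could look like against the public dstorage.h API (DSTORAGE_CONFIGURATION and DisableGpuDecompression come from the 1.1+ SDK) - purely illustrative, not Nixxes' actual code:

```cpp
// Sketch only: wiring up a "decompress on CPU" option with the public
// DirectStorage API. Not Nixxes' code; field names are from the
// DirectStorage 1.1+ SDK headers.
#include <dstorage.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT CreateStorageFactory(bool cpuDecompressionOnly,
                             ComPtr<IDStorageFactory>& factoryOut)
{
    // The configuration has to be set before the first factory is created.
    DSTORAGE_CONFIGURATION config = {};
    config.DisableGpuDecompression = cpuDecompressionOnly ? TRUE : FALSE;
    // Leaving NumBuiltInCpuDecompressionThreads at 0 lets the runtime decide
    // how many worker threads to use for CPU-side GDeflate decompression.
    HRESULT hr = DStorageSetConfiguration(&config);
    if (FAILED(hr))
        return hr;

    // GDeflate requests now get unpacked on those CPU threads instead of
    // being handed off to the D3D12 device.
    return DStorageGetFactory(IID_PPV_ARGS(factoryOut.GetAddressOf()));
}
```

Hooked up to a menu toggle (or even an ini flag), that would do cleanly what people are currently doing by deleting DLLs.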
 
I think we all basically agree here, the benefits of moving decompression to the gpu (which are also obvious!) will not be determined by one mostly gpu bound game.

The idea is mostly to shift work off the CPU and/or avoid a PCIe bottleneck. If neither is happening, you don't get any benefit. Ratchet & Clank is essentially 100% GPU bound on reasonable systems and runs fine on PCIe 4. Maybe if someone tested with artificially limited PCIe or SSD speeds we'd see a boost. But DirectStorage 1.2 isn't meant to do anything magical.
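To put rough numbers on that (the bandwidth and compression figures below are illustrative assumptions, not measurements from this game):

```cpp
// Back-of-the-envelope transfer times only; all figures are assumptions for
// illustration, and CPU/GPU decompression compute time is ignored entirely.
#include <cstdio>

int main()
{
    const double pcie4_x16_gbs = 26.0; // assumed usable PCIe 4.0 x16 bandwidth, GB/s
    const double nvme_gbs      = 7.0;  // assumed fast PCIe 4.0 SSD read speed, GB/s
    const double asset_gb      = 1.0;  // hypothetical streaming burst, uncompressed
    const double ratio         = 2.0;  // assumed GDeflate compression ratio

    // CPU decompression: compressed read from disk, *uncompressed* data copied
    // over PCIe into VRAM.
    const double cpu_ms = (asset_gb / ratio / nvme_gbs + asset_gb / pcie4_x16_gbs) * 1000.0;

    // GPU decompression: only the *compressed* data crosses the bus.
    const double gpu_ms = (asset_gb / ratio / nvme_gbs + asset_gb / ratio / pcie4_x16_gbs) * 1000.0;

    std::printf("CPU path ~%.0f ms, GPU path ~%.0f ms per 1 GB burst\n", cpu_ms, gpu_ms);
    // Roughly 110 ms vs 91 ms: both a rounding error next to a level load, so
    // unless the bus (or the CPU doing the inflating) is already saturated,
    // there's nothing to win.
}
```

The other half of the argument is the decompression work itself, which is exactly the "enough frame time to run CPU decompression" point from above.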
 
[Screenshots attached]

Edit: No RT though. Doesn't work :(
 
So glad they fixed the texture streaming issue. I got the email notification about an update to my support ticket regarding it, and they mentioned the new update fixed it.

Very happy, as the game now looks simply stunning at all times, as it should have from the start lol.

This port is in a pretty damn excellent place right now, especially if you have good hardware for it. Quite the experience!

That said, there are still a few remaining issues:

-DualSense features don't work quite like they do on PS5. A comparison video tipped me off that, for example, when you use the Burst Pistol and pull the trigger past halfway to engage rapid fire, the trigger doesn't "kick back" like it does on PS5.
-The water ripple effect when swimming.
-Some audio issues, with gun sounds either missing or extremely low at times.


But other than those minor things, they've already fixed most of the major issues I had with this game.
 
-DualSense features don't work quite like they do on PS5. A comparison video tipped me off that, for example, when you use the Burst Pistol and pull the trigger past halfway to engage rapid fire, the trigger doesn't "kick back" like it does on PS5.

Yeah, in general there's just less 'oomph'. I notice this in the Spider-Man games too when compared to the PS5; the strength of the effects is noticeably weaker.

-The water ripple effect when swimming.
-Some audio issues, with gun sounds either missing or extremely low at times.

But other than those minor things, they've already fixed most of the major issues I had with this game.

I'd say it's pretty good, but they really need to add that CPU-only DirectStorage option. People who don't know about deleting the DLLs are losing frames they don't have to.
 
Yeah, in general there's just less 'oomph'. I notice this in the Spider-Man games too when compared to the PS5; the strength of the effects is noticeably weaker.



I'd say it's pretty good, but they really need to add that CPU-only DirectStorage option. People who don't know about deleting the DLLs are losing frames they don't have to.
Yeah... I wonder if it's just a limitation of the current PC implementation of DualSense features. I'd likely bet money on that. Why the hell can't Sony just make official drivers for the damn thing?

I think an actual option to toggle "DirectStorage - on/off" would be a good idea. We need a better understanding of what's happening here with this game.
 
Apparently even the 3080 Ti's 12GB isn't enough at 4K + RT. DLSS isn't helping much with VRAM usage.



Only watched the first few seconds of that, but I noted he didn't restart the game after changing the DLSS level. That's a rookie mistake in this game, as most major settings changes hurt the frame rate (to a large extent) until you restart. That includes anything resolution related like upscaling, and I've noticed it with RT too. Granted I'm only running at 3840x1600, but I've not had any VRAM issues on the 4070 Ti. Max everything, DLSS Balanced, locked 90fps most of the time.
 
PCGH benchmarks.

The numbers with rasterization seem to suggest the PS5 really is overperforming to a notable degree. The ultra settings outside of RT don't change the performance profile much going by Alex's video.
 
PCGH benchmarks.

The numbers with rasterization seem to suggest the PS5 really is overperforming to a notable degree. The ultra settings outside of RT don't change the performance profile much going by Alex's video.

I think it's quite difficult to determine from these benchmarks. DRS can make an enormous difference so without pixel counted matching scenes it's almost impossible to compare performance. PS5 performance mode runs locked at 60 fps so we can take that to be the minimum and while DF reports the typical DRS window of 1620p-1800p we know it can drop as low as 1080p. If it is dropping that low in the scene PCGH tested (which they claimed to be a heavier than average scene on the GPU so it stands to reason it would sit lower down the DRS window on PS5) then the results roughly align with what we'd expect.

If we ignore the 8GB results as the benchmarks show those to be massively VRAM limited at those settings, then the first GPU we hit is the 6700 10GB which is roughly at the same performance level as a 2070S according to TPU or roughly what we would expect to be PS5 level. It's getting 65 average with 53 min at 1080p max raster settings and it should get a decent performance boost from running at PS5 settings vs max.

We could make the assumption that PS5 is running at a higher res like 1620p which is just a little less than their tested 3440x1440 resolution but in that scenario you're talking closer to 7900XT/3090Ti/4070Ti levels of performance. Which seems a tad unrealistic.... And that's before we even consider the max DRS window on PS5 (1800p) which would see the PS5 approaching 7900XTX / 4080 levels of performance lol.

Or to put it more simply, knowing the real resolution that the console is running at is absolutely critical to any such comparison.
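As a quick illustration of why that matters so much, here's the pixel-count math across the reported DRS window versus PCGH's tested resolution (standard 16:9 sizes assumed for each DRS step):

```cpp
// Pixel-count math only: how points in the PS5's reported DRS window compare
// with PCGH's tested 3440x1440, assuming standard 16:9 sizes for each step.
#include <cstdio>

struct Res { const char* name; int w, h; };

int main()
{
    const double tested = 3440.0 * 1440.0; // PCGH's benchmark resolution

    const Res drs[] = {
        {"1080p (reported floor)", 1920, 1080},
        {"1620p (typical low)",    2880, 1620},
        {"1800p (typical high)",   3200, 1800},
    };

    for (const Res& r : drs)
    {
        const double pct = 100.0 * r.w * r.h / tested;
        std::printf("%-22s renders %.0f%% of the 3440x1440 pixel count\n", r.name, pct);
    }
    // ~42%, ~94% and ~116% respectively: the same "locked 60" on PS5 maps onto
    // wildly different GPU tiers depending on where in that window it sits.
}
```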
 
Just to add to the above, I only have a 4070Ti so like for like testing is pretty difficult. But I tested this scene on my system at Alex's PS5 matched settings for RT Performance mode:


The Perf RT mode on PS5 runs within a DRS window of 1080p-1440p but this is a pretty heavy scene so we can assume it's probably not at the top end of that. Nevertheless I ran at a fixed 1440p and was seeing around 110fps when just standing at the very start of that scene with only around 80% GPU utilisation. So around 75% more performance than PS5 there with GPU headroom to spare at a potentially higher resolution to boot.

TPU only rates the 4070Ti at 96% faster than the 6650XT which is a pretty equivalent GPU to the PS5 so that looks to align pretty reasonably to me.

In terms of why I'm not hitting full GPU utilisation, I'm not really sure tbh. I absolutely can at higher settings, and I'm not CPU limited in the slightest at these PS5 settings, so there's a bottleneck somewhere else. I'm on a fast NVMe so it's not that. And the game seems to cap performance at 116fps so it's not that either. I wonder if at such high framerates the PCIe interface becomes the bottleneck?

EDIT: Adding to the above, I upped the resolution from 1440p to 2560x1600 (but retaining 16:9 aspect ratio) and the frame rate didn't change at all, but GPU utilisation went up to about 90%. So I upped it again to 3840x1600 (again 16:9) and finally got to full GPU utilisation with frame rates dropping to around 95fps in that same scene. So that's around 53% more fps at 67-196% more resolution depending on what part of it's DRS window the PS5 is operating in.

So it's safe to say the PS5 isn't operating anywhere near 4070Ti level or any other 3080-3090 class GPU here.
 
PCGH benchmarks.

The numbers with rasterization seem to suggest the PS5 really is overperforming to a notable degree. The ultra settings outside of RT don't change the performance profile much going by Alex's video.
They make an enormous difference.


This guide's optimized settings perform 35% better than the Very High preset and are much closer to PS5's performance mode settings than they are to Very High. That's without including DRS which can transform that 35% to 50% depending on the scene.
 
They make an enormous difference.


This guide's optimized settings perform 35% better than the Very High preset and are much closer to PS5's performance mode settings than they are to Very High. That's without including DRS which can transform that 35% to 50% depending on the scene.
Is that 35% with raster only settings?

Edit- just watched so I see that it is.

Just to add to the above, I only have a 4070Ti so like for like testing is pretty difficult. But I tested this scene on my system at Alex's PS5 matched settings for RT Performance mode:


The Perf RT mode on PS5 runs within a DRS window of 1080p-1440p but this is a pretty heavy scene so we can assume it's probably not at the top end of that. Nevertheless I ran at a fixed 1440p and was seeing around 110fps when just standing at the very start of that scene with only around 80% GPU utilisation. So around 75% more performance than PS5 there with GPU headroom to spare at a potentially higher resolution to boot.

TPU only rates the 4070Ti at 96% faster than the 6650XT which is a pretty equivalent GPU to the PS5 so that looks to align pretty reasonably to me.

In terms of why I'm not hitting full GPU utilisation, I'm not really sure tbh. I absolutely can at higher settings, and I'm not CPU limited in the slightest at these PS5 settings, so there's a bottleneck somewhere else. I'm on a fast NVMe so it's not that. And the game seems to cap performance at 116fps so it's not that either. I wonder if at such high framerates the PCIe interface becomes the bottleneck?

EDIT: Adding to the above, I upped the resolution from 1440p to 2560x1600 (but retaining 16:9 aspect ratio) and the frame rate didn't change at all, but GPU utilisation went up to about 90%. So I upped it again to 3840x1600 (again 16:9) and finally got to full GPU utilisation with frame rates dropping to around 95fps in that same scene. So that's around 53% more fps at 67-196% more resolution depending on what part of it's DRS window the PS5 is operating in.

So it's safe to say the PS5 isn't operating anywhere near 4070Ti level or any other 3080-3090 class GPU here.
I was looking at 1440p numbers and performance seemed 4060ti 16GB level.
 
It would be nice if they allowed a switch between DirectStorage 1.1 and 1.2. If you're not PCIe or CPU bound, 1.2 is just a straight performance hit, as it is here for many.
 
Think I discovered another bug. Not sure if it relates to improper mip levels or just sharpening not being set properly on fresh game load.

Was jumping back and forth between the same scenes on my PC/PS5 and was struck by how much sharper it looked on the PS5 Performance mode compared to DLSS performance, when previously I felt they were very comparable in clarity. So jumped up to DLSS quality, looked much better - then jumped back down to Performance again...and it looked nearly identical, as I remember it looking. Reloaded the game, and the same scene was back to being blurry.

Comparison here. Pay attention to the stone detail under the poster. Just FYI, when you load up the game you may have to swap your DLSS setting to another level and back again to get the sharpness you expect.

Another comparison image.

Edit: Yep, it's a mipmap issue. Nothing to do with sharpening.

*Also note this can increase VRAM usage - I get over 10GB of dedicated VRAM playing at 4K, DLSS Performance, optimized settings (no RT). Perhaps this is at least partly behind the issue where reloading the game 'fixes' performance problems after you play with graphics settings - it may 'fix' them simply by setting the wrong (lower) mipmap levels on boot, which reduces VRAM usage.
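If it is what it looks like, this smells like the standard upscaler mip-bias interaction: the engine has to bias texture sampling (and streaming decisions) by the ratio of internal render resolution to output resolution, and if it computes that from a stale value at boot you'd get blurrier mips and less VRAM in use. A sketch of the generic formula - this is the common technique upscalers recommend, not Nixxes' actual streaming code:

```cpp
// Generic mip-bias math used alongside upscalers; illustrative only, not
// Ratchet & Clank's real streaming code. A negative bias pulls in sharper
// (higher-resolution) mips to compensate for rendering below output res.
#include <cmath>
#include <cstdio>

double MipBiasForUpscale(int renderWidth, int outputWidth)
{
    return std::log2(static_cast<double>(renderWidth) / outputWidth);
}

int main()
{
    // 4K output with DLSS Performance renders internally at 1920x1080.
    std::printf("Correct bias:      %.2f\n", MipBiasForUpscale(1920, 3840)); // -1.00

    // If the engine wrongly treats the output resolution as the render
    // resolution on boot, the bias collapses to 0: textures are sampled and
    // streamed a full mip level blurrier, and VRAM use drops accordingly.
    std::printf("Stale/bugged bias: %.2f\n", MipBiasForUpscale(3840, 3840)); // 0.00
}
```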
 
Ok, the mip issue seems pretty big, as it balloons in-use VRAM usage by ~800MB-1GB when you 'fix' it by swapping DLSS levels. It doesn't matter what DLSS level you have it set to when you boot up the game; you have to temporarily change it to any other level to fix it - and yes, that even includes DLAA. It's extremely minor with DLAA, but I can notice a difference when you start with DLAA, switch to TAA, then back. It's certainly far more prominent with any DLSS level. This also affects FSR.

This means that potentially any benchmark that was run after game load without any settings change could be flawed, as it's running at a lower LOD than it should - and that includes all my DirectStorage on/off captures, of course. You're 'gaining' performance by reloading the game because it's using almost a gig less of VRAM and drawing in less detailed models.

Also, playing for a while after I 'fix' the mip levels, running 4K DLSS Performance with a mix of high/medium settings and max textures, it looks even better than before - but I also get the odd bit of VRAM thrashing in spots. So... yikes. I'll have to test with DS on/off again to see if that affects this, albeit I may need to reboot as well, so I can't say 12GB is actually restrictive at this point; more testing needed. It would not be good, though, to be VRAM limited on a 12GB card without even using RT.

But at the very least, in the interim everyone should swap their DLSS/FSR setting (and back) after booting up the game, otherwise you're getting a noticeably degraded image.
 
Ok, the mip issue seems pretty big, as it balloons in-use VRAM usage by ~800MB-1GB when you 'fix' it by swapping DLSS levels. It doesn't matter what DLSS level you have it set to when you boot up the game; you have to temporarily change it to any other level to fix it - and yes, that even includes DLAA. It's extremely minor with DLAA, but I can notice a difference when you start with DLAA, switch to TAA, then back. It's certainly far more prominent with any DLSS level. This also affects FSR.

This means that potentially any benchmark that was run after game load without any settings change could be flawed, as it's running at a lower LOD than it should - and that includes all my DirectStorage on/off captures, of course. You're 'gaining' performance by reloading the game because it's using almost a gig less of VRAM and drawing in less detailed models.

Also, playing for a while after I 'fix' the mip levels, running 4K DLSS Performance with a mix of high/medium settings, it looks even better than before - but I also get the odd bit of VRAM thrashing in spots. So... yikes. I'll have to test with DS on/off again to see if that affects this, albeit I may need to reboot as well, so I can't say 12GB is actually restrictive at this point; more testing needed. It would not be good, though, to be VRAM limited on a 12GB card without even using RT.

But at the very least, in the interim everyone should swap their DLSS/FSR setting (and back) after booting up the game, otherwise you're getting a noticeably degraded image.

Interesting find. I had noticed some performance inconsistencies myself after restarts and even full system reboots which I was putting down to possible background texture decompression but it looks like that's not the case. This would also explain why I was consistently seeing lower performance after changing DLSS levels which I put down to just bugged setting changes like resolution.

I'll have to have a look myself later.
 