Ratchet & Clank technical analysis *spawn

But the PS5 looks to outpace a 2080 when using RT here, which is rarely, if ever, seen.
I suspect what's happening here is that the RT implementation is very light, considering it's almost certainly a very similar implementation to the one in Spider-Man.
RT reflections on the PS5 are lower spatial resolution, checkerboarded and temporally accumulated, and they are traced against a very low-resolution BVH structure as well, so it is indeed a very light form of RT reflections, one that is more suitable for console hardware.
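As a rough back-of-the-envelope illustration of why that combination is so cheap (the half-res and checkerboard fractions below are my own assumptions, purely to show the scale, not Insomniac's actual numbers):

```python
# Illustrative ray-budget estimate for a "light" RT reflection setup.
# The half-res and checkerboard fractions are assumptions for scale only.
output_pixels = 2560 * 1440                      # example output resolution

full_rate_rays = output_pixels                   # 1 reflection ray per pixel
half_res = output_pixels * 0.25                  # trace at half res on each axis
checkerboarded = half_res * 0.5                  # only half of those pixels per frame

print(f"full rate: {full_rate_rays:,.0f} rays/frame")
print(f"half-res + checkerboard: {checkerboarded:,.0f} rays/frame "
      f"({checkerboarded / full_rate_rays:.0%} of full rate)")
```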
 
DLSS is rendering at a lower resolution, then upscaling the image using AI and applying AA using AI as well, to produce an image that is near, equal to, or better than native resolution + TAA, depending on the situation. DLSS is designed primarily to boost fps while maintaining image quality or minimizing the loss of it.

DLAA is rendering the game at native resolution, then applying AA using AI to boost the quality to near supersampled levels, so it's universally better than native + TAA. DLAA is designed to boost image quality at the expense of fps; it's effectively the ultimate form of DLSS.

In R&C, even DLSS is providing a superior image to IGTI, so naturally DLAA provides an even better image than DLSS, at the cost of some performance (typically around 10% of fps).
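To put rough numbers on the render resolutions involved (using the commonly cited DLSS scale factors; whether R&C uses exactly these ratios is an assumption on my part):

```python
# Internal render resolution for a 4K output under commonly cited DLSS factors.
# DLAA runs the same AI pass at a 1.0x scale, which is why it costs fps.
output_w, output_h = 3840, 2160
scale = {"DLAA": 1.0, "Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

for mode, s in scale.items():
    w, h = round(output_w * s), round(output_h * s)
    print(f"{mode:>11}: renders {w}x{h}, then outputs {output_w}x{output_h}")
```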

You are introducing a flawed comparison from within Alex's video. As you said, DLAA is rendering a native 4K image, while the performance mode is 1080p-1440p. That isn't showing the strength of DLAA; it's showing the inherent quality of the higher native pixel count.

And if you followed my back and forth with davis, you'll see that we are comparing the non-RT performance mode, which typically renders at 1440p-1800p (DF says it stays at 1800p most of the time) and, uncapped, goes up to 100fps, against the 3070 at a locked native 1440p. At high settings, the 3070 achieves 40-50fps. At medium settings, it struggles to hit 60.
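For scale, here are the raw pixel counts behind those figures (just arithmetic on standard 16:9 resolutions):

```python
# Pixel counts for the resolutions being compared (16:9 assumed).
res = {"1440p": (2560, 1440), "1800p": (3200, 1800), "2160p": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in res.items()}

print(f"1800p has {pixels['1800p'] / pixels['1440p']:.2f}x the pixels of 1440p")
print(f"2160p (native 4K) has {pixels['2160p'] / pixels['1440p']:.2f}x the pixels of 1440p")
```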

The PS5 is outperforming the 3070. Claiming DRS or DLAA as the reason for the PS5 outperforming it is ridiculous.
 
No, 16GB is not enough if your GPU doesn't also have enough VRAM.
No, it's not loading small chunks of levels but the levels themselves, apart from the jump in the futuristic city on the flying dinosaur, which is not from any level in the game; that's why this small portion also loads faster on an HDD.
And it was always a given that with enough RAM you could achieve similar results, just at a higher price.

16GB of DDR4-3200 is around $35 now; 32GB of DDR5-6000 is $100. RAM is by far the cheapest, easiest-to-install upgrade possible on a PC, outside of maybe a new keyboard. And to run any game at equivalent quality with 16GB of main memory, you are already at a far higher price than the PS5, simply due to the cost of a comparable GPU/CPU.
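Putting that in per-GB terms, using the rough prices above:

```python
# Cost per GB using the ballpark prices quoted above.
kits = {"16GB DDR4-3200": (16, 35), "32GB DDR5-6000": (32, 100)}

for name, (gb, usd) in kits.items():
    print(f"{name}: ${usd / gb:.2f} per GB")
```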

Loading up on RAM is precisely an advantage of the PC's architecture. There are definite advantages to the console's UMA too, but a game requiring, say, 32GB on the PC to match the texture detail or provide better performance than the console version wouldn't be unfair at all; it would be properly utilizing the architecture. What you lose in unified fast memory, you gain in being able to cheaply add tons of slower memory.

Now, in terms of complementing VRAM, the utility of that is in question - Diablo IV seemed to indicate that was the goal, but so far it hasn't worked out. But in terms of replicating the storage speed of the PS5's solution, it seems clear now that, at least with R&C, utilizing 32+GB of RAM could very well have been one method, but for whatever reason that's not the approach Nixxes took.
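For illustration only, here's a minimal sketch of the general "spare system RAM as a streaming cache" idea - a plain LRU cache sitting in front of the SSD. This is not how Nixxes' or Insomniac's engine actually works, just the broad concept:

```python
# Minimal illustrative sketch: keep recently-used assets resident in a big
# system-RAM cache so repeat "loads" become memcpys instead of SSD reads.
from collections import OrderedDict

class RamAssetCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()        # asset_id -> bytes, in LRU order

    def get(self, asset_id, load_from_disk):
        if asset_id in self.entries:        # cache hit: RAM-speed
            self.entries.move_to_end(asset_id)
            return self.entries[asset_id]
        data = load_from_disk(asset_id)     # cache miss: SSD-speed
        while self.used + len(data) > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)   # evict least recent
            self.used -= len(evicted)
        self.entries[asset_id] = data
        self.used += len(data)
        return data

# e.g. dedicate 16 GB of a 32 GB system to cached level data
cache = RamAssetCache(capacity_bytes=16 * 1024**3)
```

The trade-off is exactly the one described above: more, cheaper, slower memory standing in for the console's fast unified pool.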
 
Where are you getting all this information from?
 
I'm not 'making excuses', I'm explaining that the situation we have right now is ultimately still pretty good for us as consumers. And again, calling this a 'terrible port' is just straight hyperbole. With such standards, you're unlikely to ever be happy with 95% of future demanding games. It's just not realistic, quite frankly.

I'm just a fan of having some perspective, that's all. I've got no special love for Sony or anything.

So crappy ports released years later are a pretty good situation for consumers?

And you are saying you aren't making excuses?

I can forgive issues like these if they are day and date, but 2+ years later isn't a forgivable deal, especially not at full price.

This is a rare example where we get to see a more accurate GPU comparison. The PS5's non-RT Performance mode runs at native 1440p at 80-100fps with VRR, and it is obliterating this 3070 coupled with a Ryzen 3600.



The Zen 2 processor in the PS5 is an 8-core/16-thread processor. The Ryzen 5 3600 is only a 6-core/12-thread processor. That's a loss of two cores, and the Ryzen 5 3600 is clocked only about 100MHz higher, depending on what the PS5 is doing with its variable clock rate.

A more accurate comparison would be to the Ryzen 7 3700X, which is 8 cores/16 threads at 3.6GHz.
 
(posting here instead of the DF thread where you originally asked due to that thread's current infestation)

Have you tried deleting the DirectStorage DLL from the game files to force it not to use GPU decompression (if indeed that works)?

Ok, here you go. And of course, after rendering out the second video after 40 minutes, I realized... wait, why in the hell did I record it with vsync? Duh. Well, regardless, the difference is still apparent, even if the absolute framerate would have been a more sensible metric. This at least has the advantage of showing the disparity in CPU usage when trying for the same peak framerate, I guess.

Here are two scenes, with and without those DS DLLs. The PC was rebooted after each run to ensure nothing was cached. Very High textures, High everything else, no RT. First mostly combat, then the familiar Rift sequence - surprisingly, even in that sequence DirectStorage only improved loading times very marginally, if even that.


Now here's a much smaller area, with High textures - I wanted to see if there was still a difference in an enclosed section, using a texture setting that still engages the GDeflate-packed textures, but not the largest ones. Surprisingly, there's still a constant fps difference in favour of non-DirectStorage.


Now, these aren't the most exhaustive tests, and it's only on one GPU, without RT - so I'm not VRAM-pressured with my 12GB 3060. As TechReport notes, cards with less VRAM suffer - but they do so in a linear fashion. If you run out of VRAM in this game, your performance will drop, but it doesn't really produce the massive stutters that other games do, which is ideally how it should behave. Bear in mind this is not something entirely new that DirectStorage has brought - Doom Eternal also behaved this way - but it could mean that if I ran this test on an 8GB GPU, I might see more stutters when not using DS.
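If anyone wants to quantify runs like these a bit more rigorously, something along these lines over PresentMon/CapFrameX frametime logs would do it - assuming the standard MsBetweenPresents column; the file names here are just placeholders:

```python
# Compare average fps and 1% lows from two PresentMon-style CSV logs
# (assumes the standard MsBetweenPresents column, in milliseconds).
import csv

def fps_stats(path):
    with open(path, newline="") as f:
        frametimes_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    frametimes_ms.sort()
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    worst_1pct = frametimes_ms[int(len(frametimes_ms) * 0.99):]   # slowest 1% of frames
    low_1pct_fps = 1000.0 / (sum(worst_1pct) / len(worst_1pct))
    return avg_fps, low_1pct_fps

# placeholder log names for the two runs
for label, log in [("with DS dlls", "with_ds.csv"), ("without DS dlls", "no_ds.csv")]:
    avg, low = fps_stats(log)
    print(f"{label}: {avg:.1f} fps avg, {low:.1f} fps 1% low")
```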

Make of it what you will. Ideally, as I expected before launch, the option to switch between CPU and GPU DS decompression would be a visible toggle in the game, to take advantage of precisely this scenario - where you have the CPU headroom at your chosen settings to let it fully handle decompression, but want to free up a few GPU cycles from the task. Hopefully this option will come in a future patch, or they (or MS, or Nvidia/AMD) may just optimize the GPU path further.

(This all of course assumes that removing these DLLs actually removes DirectStorage support from the game entirely; it's possible an earlier version of DS without GPU decompression is in the Windows system folders and the game just defaults to that? Dunno.)
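For anyone wanting to repeat the test, a small helper along these lines moves the DirectStorage redistributables aside (reversibly) rather than deleting them - assuming the game ships dstorage.dll / dstoragecore.dll next to the executable; the install path here is obviously just a placeholder:

```python
# Move the game's DirectStorage redistributable DLLs into a backup folder
# instead of deleting them, so the test is easily reversible.
from pathlib import Path
import shutil

game_dir = Path(r"C:\Games\Ratchet & Clank Rift Apart")   # hypothetical install path
backup = game_dir / "dstorage_backup"
backup.mkdir(exist_ok=True)

for name in ("dstorage.dll", "dstoragecore.dll"):          # assumed redistributable names
    dll = game_dir / name
    if dll.exists():
        shutil.move(str(dll), str(backup / name))
        print(f"moved {name} -> {backup}")
    else:
        print(f"{name} not found (perhaps already moved)")
```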
 
Thanks for putting the work in around that. That's a pretty remarkable result even with vsync on. The first battle in the first video shows it best: not only is the framerate clearly higher without DirectStorage, but critically, the CPU utilisation is also noticeably higher, which absolutely reinforces the theory that the decompression has fallen back to the CPU there. That said, the difference is only around 10% when you don't appear to be anywhere near CPU limited. So if that is the case, the streaming decompression requirements - at least in that sequence - seem to be fairly light in this game, and it would seem like a big win in this case to have a toggle to allow the CPU to do that work. This definitely warrants deeper investigation!
 
He says the PS5 runs 20% faster than his 2070. How does running 20% faster than an RTX 2070 mean it's closer to a 2080Ti? The 2080Ti is 60% faster than a 2070, and the 3080 is 2x faster than a 2070; his math is all wrong.
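Laying out the figures quoted in this post with the 2070 as the baseline makes the gap obvious (these are the numbers above, not fresh benchmarks):

```python
# Relative performance using the figures quoted above (RTX 2070 = 100).
relative = {"RTX 2070": 100, "PS5 (his +20% claim)": 120,
            "RTX 2080 Ti (+60%)": 160, "RTX 3080 (2x)": 200}

for gpu, score in relative.items():
    print(f"{gpu}: {score} ({score / relative['RTX 2070']:.2f}x a 2070)")
```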

To add some extra clarity for a certain someone who doesn't know how things work:

relative-performance_3840-2160.png
 
The standard of conversation in this thread is inadequate. I've wasted 30 minutes trying to work out what's what and decided I can't. Juvenile platform remarks are being responded to with juvenile insults and I can't sort the good stuff from the bad. Best course of action I can see is nuking the past two pages. If posts after this warning don't turn things around, that'll be what happens.

If someone is being a childish platform troll, not responding leaves mods the opportunity to clean up by removing their content. Responding means we have to work through the replies and edit them as well, and that becomes too much work for careful moderation to remain viable.

If you are posting with a view to identifying winners and losers and waving triumphant flags, please leave this forum. We're just about collecting and discussing accurate data.
 
Shut all the noise above really quick haha gotta love it!

"PS5 running much closer to a 2080TI and maybe even a 3080"



Who are you trying to convince with these flawed arguments? You were flat-out wrong about DLAA, which you've just glossed over. You haven't even attempted to address flappy's point about the big performance drop-off when not resetting the PC between resolution changes (here's more info on the impact that can have, btw). And then you have direct evidence of the game running on a much slower system in the same area at the same settings achieving better performance, and much better performance without DLAA. Your argument around that initial video has been thoroughly refuted.

With regards to your "new evidence": the Tech Report graph is showing 1% lows at 1440p, i.e. the performance pain point, i.e. exactly where the PS5 would be most likely to drop to the bottom of its DRS window (1080p). In addition, that's at Max settings, which are above the PS5-equivalent settings and thus not directly comparable.

As to the NXG video, where to start? The fact that he's testing on a 2700X, which is a slower CPU than the PS5's? Or the fact that he's testing with a SATA SSD in the most IO-intensive section of the game and comparing GPU performance based on highest frame times, which could very easily be caused by either CPU or IO bottlenecks? Or perhaps the fact that, by his own measure of 20% faster on average, he makes the wild claim that this would put the PS5 into 2080Ti (44% faster) or 3080 (100% faster) territory? Note that those performance uplifts for those GPUs are supported by the same TPU benchmarks you use above.

Or perhaps we could talk about how he didn't mention what settings he used, but if we assume he really did match the PS5 as closely as possible, as per Alex's settings, then he would have been using Very High textures, which are too large for the 2070's 8GB of VRAM and thus cause a significant performance drop-off (and again, likely frame time spikes when things overflow into system memory).

Again I draw your attention to the same benchmarks you linked above, specifically the 1440p RT 4060Ti 8GB vs 16GB results, where the only difference is that the 4060Ti is not limited by its 8GB of VRAM. Here the 16GB card is 13% faster, which would eat a long way into that 20% average lead he mentioned for the PS5 - albeit by way of lower-resolution textures on an 8GB card - but it's widely accepted that more available VRAM is a real advantage of the PS5 over 8GB cards, so PC gamers should certainly expect to lower texture quality slightly.

min-fps-2560-1440.png
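Rough arithmetic on what "eat a long way into" means, using the 20% and 13% figures above:

```python
# How much of a 20% lead remains once a ~13% VRAM-related deficit is removed.
ps5_lead = 1.20        # PS5 ~20% ahead of the VRAM-limited 8GB card (his average)
vram_recovered = 1.13  # 16GB 4060Ti ~13% ahead of the 8GB 4060Ti at 1440p RT
remaining = ps5_lead / vram_recovered

print(f"lead left once the VRAM penalty is removed: ~{(remaining - 1) * 100:.0f}%")
```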


I do note he claims the PS5 is also running at a higher average resolution; however, this ignores the higher performance cost (with correspondingly higher image quality) of DLSS vs IGTI. Ultimately, if the 2070 is outputting as good or better image quality thanks to DLSS despite a lower internal resolution, then that's what matters from the end-user experience perspective, and it's doing so by leveraging an architectural advantage it holds over the PS5. That's certainly something I would consider fair in a game that was literally built from the ground up to take full advantage of the PS5 architecture and then shoehorned onto the PC.

Also, I'd just make one final point about his repeated disingenuous attempts to paint this port as some kind of paragon of quality and the best-case optimisation scenario for the PC. It is not. RT doesn't work on AMD cards, it had to be hotfixed within a day of launch to address major RT issues on NV cards, and it has a few crashing issues. It's still a good port, and I've little doubt it will become a great one after a few patches, but this is not its most optimised state, and just like the PS5 version of this game at launch, I expect it will receive more optimisation as time moves on.
 