Ratchet & Clank: Rift Apart [PS5, PC]

The technical improvement over the predecessor is clear, but it absolutely needs ray tracing. Without ray tracing the shadows look unnaturally hard, which damages the visual experience considerably.
A different variation of shadow mapping could work decently for variable penumbras.
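For instance, percentage-closer soft shadows (PCSS) gets variable penumbras out of an ordinary shadow map by widening the filter kernel with the receiver-blocker gap. A minimal sketch of that estimate in C++ (the function names and the clamp are mine, not from any shipping engine):

#include <algorithm>

// PCSS-style penumbra estimate: a wider light and a larger gap between
// receiver and average blocker depth give a softer shadow edge.
float PenumbraWidth(float receiverDepth, float avgBlockerDepth, float lightSize)
{
    // Similar-triangles relation used by percentage-closer soft shadows.
    return (receiverDepth - avgBlockerDepth) * lightSize / avgBlockerDepth;
}

float ShadowFilterRadius(float receiverDepth, float avgBlockerDepth, float lightSize)
{
    // Clamp so contact shadows stay hard and distant ones don't blur out entirely.
    float w = PenumbraWidth(receiverDepth, avgBlockerDepth, lightSize);
    return std::clamp(w, 0.0f, lightSize); // radius in shadow-map UV units (my assumption)
}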
 
That is correct. Other games like The Last of Us also have nice soft shadows without ray tracing. But if I were to play Ratchet & Clank, ray tracing would be mandatory, since this game has no alternative raster solution with soft shadows.
 
NX Gamer's video on the R&C PC version. This guy is very good at what he does.

His 3080 averages 48 fps in that first section and my 2080 Ti averages 50 fps. His minimum is 23 fps and the minimum on my 2080 Ti is 31 fps. A VRAM issue on the 3080's end, perhaps? My 2080 Ti being 15% faster than the PS5 is normal; his 3080 being basically equal to my 2080 Ti is not.

02-10-2023, 22:59:04 RiftApart.exe benchmark completed, 7879 frames rendered in 157.859 s
Average framerate : 49.9 FPS
Minimum framerate : 31.7 FPS
Maximum framerate : 60.1 FPS


Also, loading is now faster than before in that portal sequence.
 

10GB is definitely not enough for Very High textures in this game, at least not with everything else maxed out, as I hit limits on 12GB in that case.

So even at the lesser PS5-matched settings I expect 10GB to cause a performance bottleneck to some degree.
 
Hah, there is a specific spot in the Torren IV location where the 3070 gets GPU-bound at 40 FPS with 1080p DLSS Ultra Performance + low textures. It turns out to be a PCIe limitation. In practice the game pushes 12 GB/s over PCIe and the framerate tanks to 40. Look the other way and the traffic drops to 8 GB/s and the framerate climbs into the 70s-80s. I'm sure this is still not the performance a 3070 should get here, but that's another topic.
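Quick back-of-the-envelope on what those figures mean per frame (my arithmetic, not from any capture tool):

#include <cstdio>

int main()
{
    // Bus traffic and framerates observed in the Torren IV spot above.
    const double heavyGBps = 12.0, heavyFps = 40.0; // looking at the vista
    const double lightGBps = 8.0,  lightFps = 75.0; // looking away (mid of the 70s-80s)
    std::printf("looking at it: ~%.0f MB over PCIe per frame\n", heavyGBps / heavyFps * 1024.0);
    std::printf("looking away:  ~%.0f MB over PCIe per frame\n", lightGBps / lightFps * 1024.0);
}

That's roughly 300 MB shuffled across the bus every single frame in the bad case, which looks like wholesale asset traffic rather than normal frame-to-frame uploads.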

Of course, no problem: games should make smart use of PCIe bandwidth when they're VRAM limited. Here's the funny part though: this is with low textures at 1080p and extreme upscaling. There's no VRAM limitation, so there's really no reason the game should keep relying on PCIe; it makes no sense. There's a huge chunk of free DXGI budget to use. It really looks like the game is hard-coded to make transfers over PCIe when you look that way. This should potentially affect everyone, since high PCIe utilization will stall any kind of GPU performance.
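For anyone who wants to check that budget claim themselves, the local/shared figures the game sees come from DXGI. A minimal standalone query using IDXGIAdapter3::QueryVideoMemoryInfo (adapter index 0 assumed; link against dxgi.lib):

#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;
    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1; // first GPU (assumption)
    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO local{}, shared{};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &shared);
    std::printf("VRAM:   %llu / %llu MB used of budget\n",
                local.CurrentUsage >> 20, local.Budget >> 20);
    std::printf("Shared: %llu / %llu MB used of budget\n",
                shared.CurrentUsage >> 20, shared.Budget >> 20);
}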

Nixxes ports are a true mystery. You can't escape their VRAM limitations even with extreme sacrifices on textures/settings. They just keep utilizing shared system memory no matter what.

Good analysis, so this is what's preventing me from getting 60 FPS.

They should only use shared system memory when VRAM is completely filled up, not when it's sufficient. Hmm. Maybe it's a bug?
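The expected behaviour would be something like this (a hypothetical policy sketch, obviously not Nixxes' actual code):

#include <cstdint>

enum class Pool { LocalVram, SharedSysmem };

// Spill an allocation to shared system memory only when it no longer fits
// in the remaining local (VRAM) budget, with a small safety margin.
Pool ChoosePool(uint64_t vramBudget, uint64_t vramInUse, uint64_t allocSize)
{
    const uint64_t headroom = 256ull << 20; // 256 MB margin, my assumption
    if (vramInUse + allocSize + headroom > vramBudget)
        return Pool::SharedSysmem;
    return Pool::LocalVram;
}

What the game appears to do instead is keep some resources in shared memory, with the resulting PCIe traffic, even when the local budget has gigabytes free.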
 

I don't think it will be as severe for you, as the combination of DDR5 bandwidth and PCIe 4.0 will mitigate this behaviour greatly. Though I guess you may still see some kind of performance delta.

I can manually change the PCIe link speed on my motherboard all the way down to 2.0.

So it might get very interesting.
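For reference, roughly what each link setting gives you in usable x16 bandwidth (my numbers, after encoding overhead), against the 12 GB/s the game was seen pushing:

#include <cstdio>

int main()
{
    struct Gen { const char* name; double gbPerLane; }; // usable GB/s per lane, approx.
    const Gen gens[] = { {"PCIe 2.0", 0.5}, {"PCIe 3.0", 0.985}, {"PCIe 4.0", 1.969} };
    const double observed = 12.0; // GB/s measured in the Torren IV spot
    for (const Gen& g : gens) {
        const double x16 = g.gbPerLane * 16.0;
        std::printf("%s x16: ~%4.1f GB/s -> %s the observed 12 GB/s\n",
                    g.name, x16, x16 < observed ? "below" : "above");
    }
}

At 2.0 the x16 link tops out around 8 GB/s, well under what the game wants there, so the drop should be very visible.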
 
That area is definitely the most taxing in the game. I'll have to check it out on my PC with the 4090... but I know for a fact that that specific area is the one place which tanks performance on the Steam Deck.
 
not worth it



Yadda yadda, just watch the videos and come to your own conclusions. I don't have the patience to explain things anymore, since they keep being misinterpreted.
 
Or perhaps it's just a limitation of the PC's memory system. Cyberpunk is a last-gen game with last-gen assets (RT doesn't change that), so it'd have much less to pull through the PCIe bus.

We really need unified memory on PC too. Hopefully Snapdragon will be for Windows what the M1 was for Apple.
 
not worth it
Now that is just childish. You've made a great post with solid deductions and deleted it for nothing, just because I have a different opinion of the PC platform as a whole than you and added another point of how that could correlate to what we're seeing here. We're just interpreting data in different ways, nothing more. After all, with a unified memory approach like the consoles have, we would not have to worry about the PCIe bus not being wide enough to shuffle all the data through, as everything would be on the same SoC. And Remij's data confirms this, as PCIe Gen 4 does not have this issue, simply because it can brute-force through the problem.
 