Ratchet & Clank: Rift Apart [PS5, PC]

I'm not following. Why was this thread bumped? Anything new?
There's nothing new as in an update to the game. That will come with AMD's FSR 3.1, which I'm looking forward to testing out, but for now, it's just an observation on how the game handles memory.

Yamaci17 posted a video showing how Ratchet can be PCIe bound in a certain area of the game due to how it handles memory, explaining why the framerate tanks on certain cards in that area on some setups.

He's not happy with how Ratchet handles memory: despite the card having ample VRAM, the game isn't making use of it, and instead the PCIe bus is being thrashed, causing it to become the limitation.

Dampf made a comment about it just being a limitation of the architecture, and that platforms like consoles with unified memory don't have this problem because they aren't copying data back and forth over the bus from RAM to VRAM. Yamaci17 took issue with that and deleted his posts, saying his point was being misrepresented.

For my part, I don't think there's anything wrong with what Dampf said. It's simply a fact of the architecture: games on PC have to transfer data over the PCIe bus from system memory to VRAM, and in games that stress this, it can and will become limiting. However, I also agree with Yamaci17's point that there's really no reason for the game to do that when it's not VRAM limited. The game should handle memory intelligently so as to reduce the pressure on the PCIe bus. So in that sense, it's not really a limitation of the architecture; it's an issue with how they've coded the game to handle memory.

That's a common story with these PlayStation exclusives ported to PC, unless you can brute-force your way past it.
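
To make the "transfer over the PCIe bus" point concrete, here's a minimal, self-contained D3D12 sketch of the staging copy a PC game has to perform: the CPU fills a buffer in an UPLOAD heap (system RAM), then the GPU copies it into a DEFAULT heap buffer (VRAM), and that copy is the traffic crossing the bus. This is purely illustrative; the 64 MB payload and all the setup are my own, not anything from Ratchet's engine.

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) return 1;

    const UINT64 size = 64ull << 20; // 64 MB payload, purely for illustration

    // A plain buffer of `size` bytes; the same description is used for both copies.
    D3D12_RESOURCE_DESC desc{};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = size;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    // Staging buffer in system memory (UPLOAD heap): CPU-writable, GPU-readable over PCIe.
    D3D12_HEAP_PROPERTIES uploadHeap{ D3D12_HEAP_TYPE_UPLOAD };
    ComPtr<ID3D12Resource> staging;
    device->CreateCommittedResource(&uploadHeap, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&staging));

    // Destination buffer in VRAM (DEFAULT heap).
    D3D12_HEAP_PROPERTIES defaultHeap{ D3D12_HEAP_TYPE_DEFAULT };
    ComPtr<ID3D12Resource> vramBuffer;
    device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&vramBuffer));

    // CPU fills the staging buffer.
    void* mapped = nullptr;
    staging->Map(0, nullptr, &mapped);
    memset(mapped, 0xAB, size);
    staging->Unmap(0, nullptr);

    // Record and submit the copy on a copy queue: this is what crosses the PCIe bus.
    D3D12_COMMAND_QUEUE_DESC qdesc{ D3D12_COMMAND_LIST_TYPE_COPY };
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));
    ComPtr<ID3D12CommandAllocator> alloc;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COPY, IID_PPV_ARGS(&alloc));
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COPY, alloc.Get(), nullptr, IID_PPV_ARGS(&list));
    list->CopyResource(vramBuffer.Get(), staging.Get());
    list->Close();
    ID3D12CommandList* lists[] = { list.Get() };
    queue->ExecuteCommandLists(1, lists);

    // Wait for the GPU to finish the copy.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);

    printf("Copied %llu MB from system RAM to VRAM over PCIe\n", size >> 20);
    return 0;
}

On a console with unified memory that staging step simply doesn't exist, because the CPU and GPU read from the same pool.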
 

I don't think the issue with Dampf's post was the claim that the performance problem on a PCIe 3 based PC could be down to the game originating on the console's HUMA architecture. I was actually going to suggest the same myself, and it's a logical conclusion to reach given that the game is not VRAM limited but does appear to be PCIe limited anyway, i.e. it's the transfer of data between CPU and GPU for processing that is the culprit, as opposed to the size and complexity of geometry and texture assets.

The issue with the post was the characterisation of the difference in architectures as a pure disadvantage and something that needs to be done away with, along with the use of Cyberpunk as evidence of that without any technical reasoning.

On the HUMA point, the first thing to note is that this is only a limitation on PCIe 3 systems, i.e. systems that pre-date the PS5 itself. On current PCIe 4 systems, which have been available since before the PS5's launch, this is not a problem, and of course PCIe 5 is also a thing now, which doubles bandwidth again. So it's not an architectural issue, but rather an issue of using old, bandwidth-constrained components to run a modern game that expects modern components in order to run at 60fps+.

On the flip side, split memory pools allow for a much more cost-efficient way of achieving high volumes of memory where it's needed the most, while also providing much better latency to the CPU, again where it's needed the most. So simply moving the PC to a HUMA architecture is far from the answer, especially given some more recent API advancements which, if fully leveraged, would allow a split memory pool architecture to behave more like a HUMA one while retaining the split-pool advantages noted above.
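
As a rough illustration of the kind of API capability I'm referring to, here's a small standalone Vulkan sketch (my own, nothing from the game) that lists the memory types a GPU exposes. With Resizable BAR enabled you'll typically see a large heap flagged as both DEVICE_LOCAL and HOST_VISIBLE, i.e. VRAM the CPU can write into directly rather than staging everything through system RAM first:

Code:
// List GPU memory types and flag CPU-visible VRAM (large with Resizable BAR).
// Assumes the Vulkan SDK is installed; link against vulkan-1.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkInstanceCreateInfo ici{ VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);
        printf("%s\n", props.deviceName);

        for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
            VkMemoryPropertyFlags flags = mem.memoryTypes[i].propertyFlags;
            VkDeviceSize heapSize = mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
            bool deviceLocal = flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
            bool hostVisible = flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
            printf("  type %2u: heap %6llu MB%s%s%s\n", i,
                   (unsigned long long)(heapSize >> 20),
                   deviceLocal ? " DEVICE_LOCAL" : "",
                   hostVisible ? " HOST_VISIBLE" : "",
                   (deviceLocal && hostVisible) ? "  <- CPU-writable VRAM" : "");
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}

Exposing that to the engine still isn't the same thing as unified memory, of course, but it's why I say the gap can be narrowed without throwing away the split pools.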


Or perhaps, it's just a limitation of the PC's memory system. Cyberpunk is a last gen game with last gen assets (RT doesn't change that), so it'd have much less to pull through the PCIe slot.

We really need unified memory on PC too. Hopefully Snapdragon will be for Windows what the M1 was for Apple.

I really don't see what Cyberpunk has to do with this. What evidence do you have that R&C is pushing more geometry than CP in the first place? And even if it is, given that VRAM clearly isn't the problem here, what makes you think geometry density has anything to do with this anyway? If it did, then surely that geometry should be streamed into VRAM for when it's needed, if there is spare VRAM. And if that isn't happening when it needs to, then that is absolutely an issue with the game itself as opposed to an architecture issue.

And for the record, characterising CP2077 as a "last gen game" is crazy man. I'm playing it right now, I recently completed R&C and my wife is currently playing Forbidden West. As much as I admire the graphics on the latter two, CP (with full PT) is in a completely different league IMO.
 
There's no need to say IMO; that's a fact. What Cyberpunk 2077 achieved with PT was revolutionary and unprecedented in games.

The PC certainly doesn't need unified memory. Buying the CPU and graphics card separately is an advantage for the PC. People talk about the price of PC gaming being too high, but if you have to change everything every time, it becomes more expensive. A high price for what benefit?
 
Can't they make a unified pool of memory while still being able to change the CPU and GPU separately? Is it really not technically possible?
 
You can clearly see the game is not using any PCIe bandwidth on your end at all. What is happening on your end has nothing to do with PCIe revisions. The problem is not fixed by PCIe 4 on your end; it is fixed by the larger VRAM buffer and the game changing its behaviour for it. For me, the game uses 2.6 GB of shared VRAM no matter what settings I use. On your end, it doesn't use as much (1 GB or so) and doesn't even transfer that much. This pretty much proves that if I put a 24 GB card into my PCIe 3 system, I'd get similar performance results to yours. It seems like their memory manager is hard coded to utilize that bit of shared VRAM on 8 GB cards regardless of settings. That is the problem. It should only do this when VRAM constrained, not when you're not VRAM constrained at 800p DLSS Ultra Performance with low textures;

kSCO8yo.png

8jl5rP8.png



xsSamyh.png


The reason I'm getting fed up is that I'm shocked to see anyone have the audacity to rationalize this. You could understand this happening if you were pushing the LOCAL VRAM budget to its limits. The game looks like garbage at these settings and is merely using 3 GB of local memory. The game is simply hard coded to destroy performance through PCIe thrashing on 8 GB cards no matter what settings you choose.

QmoExbt.png


CuPD13F.png


The game doesn't even use more than 2.5-3 GB/s of bandwidth on your end. If you want to prove my point, go look at your BIOS settings and, if you can set your mobo to PCIe 3, try that. I'm sure you will see the same good performance even with PCIe 3. If not, I will admit defeat, retract my position and apologise to everyone, if you go out and show me that you get much worse performance with PCIe 3 on your end. I legit will do this and take the L and move on. Just do it, if you can, please. I implore you.
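
And just so it's clear what these dedicated vs shared numbers actually are: Windows tracks, per process, how much of the local (dedicated VRAM) and non-local (shared system memory) segment groups that process is using and what budget it has been given, and any engine can query it through DXGI. Here's a minimal standalone sketch of that query (mine, not from the game or from any overlay; run on its own it will of course only report its own process's usage):

Code:
// Query the calling process's dedicated (local) vs shared (non-local) video
// memory usage and budget via DXGI.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<IDXGIAdapter3> adapter3;
        if (FAILED(adapter.As(&adapter3))) continue; // IDXGIAdapter3 is needed for the query

        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);

        DXGI_QUERY_VIDEO_MEMORY_INFO local{}, nonLocal{};
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

        wprintf(L"%s\n", desc.Description);
        wprintf(L"  dedicated (local):  %llu MB used of %llu MB budget\n",
                local.CurrentUsage >> 20, local.Budget >> 20);
        wprintf(L"  shared (non-local): %llu MB used of %llu MB budget\n",
                nonLocal.CurrentUsage >> 20, nonLocal.Budget >> 20);
    }
    return 0;
}

The point being: the engine knows exactly how much free local VRAM the OS is giving it, so "spill to shared memory only when that budget is actually exhausted" is not some impossible ask.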

And for the record, characterising CP2077 as a "last gen game" is crazy man. I'm playing it right now, I recently completed R&C and my wife is currently playing Forbidden West. As much as I admire the graphics on the latter two, CP (with full PT) is in a completely different league IMO.
Aside from that, I tested Phantom Liberty in that instance, which is actually the next-gen part of Cyberpunk. It was not released on last-gen systems, it has much higher VRAM demands than the base game, much denser geometry and much higher quality textures, and much higher requirements than the base game too. That doesn't matter, though: Ratchet at 800p with low settings must look better than Cyberpunk running at high settings with ray tracing, apparently, considering the former feels the need to use PCIe despite having free VRAM.

I can download Alan Wake 2 and prove that such behaviour does not happen in that "next-gen" game, though I'm pretty sure some other excuse will be brought up for that one as well. So I don't see the point of doing that unless someone really asks me to.
 
Before I post my video, I just quickly want to say that I never said you were wrong! Nobody was out to prove you wrong either! You posted that it was a PCIe limitation (which it is on your system), having ensured VRAM was not a bottleneck. My first post was merely to provide another data point, and now with this new video from me, we have the information needed to prove your assumption correct. It's due to how the game is coded to handle memory with different VRAM budgets.


So I agree with you. The game needlessly thrashes the PCIe bus on cards with lower VRAM capacities regardless of whether VRAM is a limitation or not.
 
I just arrived at that spot and it completely trashes performance on my system too. Can't hold 30 FPS any longer on my settings. Blizar Prime and the beginning of this planet where you can view all of the impressive geometry at once was completely fine, but for some reason, this specific spot decides just to completely tank it.

@yamaci17 have you posted your data to Nixxes on Twitter or their support?
 
I reported this specific place being a "sudden" performance problem long ago to both NVIDIA and Nixxes through their feedback systems, although at the time I wasn't particularly aware that it was due to PCIe usage. This spot is not the only problematic place, either; there is also a place towards the end game where you go up a building on a large lift platform. That place also has performance quirks like this.

Another weird thing with this game is that if you force resizable BAR through Inspector, the game initially works fine, but when you exit the game, the whole system freezes and I have to hard shut down the PC. And do you know what the funny part is? This behaviour continues even if you physically disable resizable BAR, which means the resizable BAR flag must be doing something even when resizable BAR is physically disabled. This is even more intriguing, LOL.

The only thing they fixed that I happened to report was the FPS drops when shooting shotguns or heavy weapons on 8 GB VRAM cards (yes, that problem was specific and exclusive to 8 GB cards as well). It seems like their over-reliance on a shared VRAM cache on VRAM-limited platforms is a huge burden that creates random problems here and there. Look, I understand the need for shared VRAM being there, but the game shouldn't rely on it like that for people who just want to play at 1080p/medium. It's realistically beneficial if you try to push something like 1440p/ray tracing; then you kind of see the merit and value behind such a system. Do they assume people with 8 GB cards will play at VRAM-intensive resolutions/settings/ray tracing etc.? That's a big assumption.

And do you know how I came to discover this problem? I took a personal L and told myself: "Okay man, this game needs 10 GB VRAM per the PS5 spec. Forget about 1440p, forget about ray tracing. I will just play it at 1080p, DLSS, medium preset, and hopefully I should experience no sub-60 FPS drops then." And then I came across this region, then that giant Seekerpede, then the shotgun issue. So I gave up on the game back then. Only now do I discover why all of this happens.

Regardless, I don't think it will be fixed; it will soon be a year since the game was released, and I don't even look forward to any fixes tbh. I just thought it would be cool to share my findings and discuss some stuff about it. Now that we have reached some kind of understanding, and my mind is cleared, I'm sorry, regardless, for being bitter. I just felt disrespected; I personally cannot rationalize whatever their engine is doing at 1080p/low/medium settings. Even the RTX 4060 is limited to 16 GB/s of bandwidth (being a PCIe 4 x8 GPU). I just think whatever they're doing is unsustainable. Imagine if this game had 2x the GPU demands; it would've been even more catastrophic. There's clearly something wrong here: wrong assumptions, predefined rules the engine does not stray from. And it doesn't have to be that way. Now that we've seen that when the game detects a big buffer (24 GB of VRAM) it doesn't put anything into shared VRAM and doesn't use anything from it, there's technically no reason it should not behave like that at 1080p/medium on an 8 GB card. The textures look horrendous at the medium preset; it's already a big L for anyone who has to use them. But at least let people have good performance then.
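
To spell out the rule I'm arguing for, here's a toy sketch of the placement decision (purely my own illustration; I obviously have no idea how their memory manager is actually structured): place allocations in local VRAM while the budget allows it, and only fall back to the shared, PCIe-backed pool once it is genuinely exhausted, rather than doing so by default whenever an 8 GB card is detected.

Code:
// Toy placement rule: spill to shared system memory only when the local VRAM
// budget is actually exhausted. Illustration only; budgets and sizes are made up.
#include <cstdint>
#include <cstdio>

enum class Pool { LocalVram, SharedSystemMemory };

struct VideoMemoryBudget {
    uint64_t localBudget; // what the OS lets this process use of dedicated VRAM
    uint64_t localUsed;   // what the engine has already committed there
};

Pool choosePool(const VideoMemoryBudget& b, uint64_t allocSize) {
    if (b.localUsed + allocSize <= b.localBudget)
        return Pool::LocalVram;          // plenty of room: keep it on the card
    return Pool::SharedSystemMemory;     // PCIe-backed and slow: last resort only
}

int main() {
    VideoMemoryBudget b{ 7ull << 30, 3ull << 30 };   // 7 GB budget, 3 GB in use
    uint64_t alloc = 512ull << 20;                   // a 512 MB streaming allocation
    printf("place in %s\n",
           choosePool(b, alloc) == Pool::LocalVram ? "local VRAM" : "shared system memory");
    return 0;
}

A 3070 at 800p/low is nowhere near that first branch failing, which is exactly why the constant shared-memory traffic makes no sense to me.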
 
Have you had the chance to test whether Forbidden West exhibits the same issue? For all the praise Nixxes gets for their ports, I don't feel like they run all that well compared to the PS5. I'm not expecting parity, but I scratch my head when cards 30% faster end up equal or sometimes even slower despite having 10 GB of VRAM.

Looking at the Ghost of Tsushima requirements has me raising an eyebrow as well. An RTX 4080 for 4K60, for a game that runs at 1920x2160 60fps on a PS5?
 
I only tested the initial tutorial area of Forbidden West and the game had already cached 2 GB or so of shared VRAM there (with 5-6 GB/s of PCIe transfer). I'm sure the game gets more VRAM demanding as you advance and will utilize even more, and Burning Shores probably destroys it completely (I've seen many reports about it). Even worse, NVIDIA has enabled resizable BAR for the game, which reduces performance even further on 8 GB cards, and people now have to go and manually disable it for the game to claw back some VRAM/performance (that's how disconnected NVIDIA is from the reality of 8 GB not being enough).

I didn't bother much with it this time. I do want to give the DLC a try though.

This was after doing two cycles of running around this super limited area. The game had already cached a lot of data to shared VRAM and caused a performance tank. This is 1440p DLSS Balanced at high settings (PS5 equivalent). The PS5 can run this game checkerboarded at 1800p 60 FPS (so around 2.9 million pixels, and if you also factor in the checkerboarding overhead, one could say the PS5 would be able to push around native 1300p/60 FPS in the worst case).

Here, however, the 3070 is tanking to 56 FPS at an 836p render resolution. There's not much to say about it. These ports suck, at least for me. Good for people who have 16-24 GB VRAM cards; at least for them they're not stuttery messes (if you're not VRAM limited, that is; I get stutters pretty much all the time due to constant PCIe transfers. Unlike what some believe, Windows systems are not optimized for such transfers, and I get random stutters all the time with poor 1% lows. That is why PCIe is not a solution.)

49EBc0Q.jpeg


RqHL9Pf.jpeg


Funny, isn't it? Losing 25% of effective performance just by running back and forth. And this is happening at 836p. Not much to say.

Disclaimer: the higher FPS picture was taken later than the lower FPS one, because I restarted the game to see what my actual performance should be like. It goes like this:

Initial launch: 69 FPS
Walk around, come to same place: 56 FPS
Restart: 69 FPS
 

Forbidden West is horrible with VRAM. The game runs silky smooth at a locked 60fps at max settings (averaging in the 70s unlocked) most of the time on my 4070 Ti 12GB, then in cutscenes and some select gameplay areas it will buckle completely, sometimes into the 20s! Turning texture resolution down from Very High to High completely resolves this and we're back at a silky smooth 60. This is at 3840x1600 DLSSQ, which is roughly the same resolution as the PS5's Performance mode, although the widescreen aspect ratio means more of the world is visible on screen, which I guess might increase texture requirements?

Problem is, to avoid having to do that I have to leave the texture setting at High. I can't really notice a difference, but it does make me wonder why games don't simply have a "dynamic texture detail" setting which automatically switches down when it detects VRAM overflow. I also wonder if consoles have this manually built in, similar to how R&C on PS5 hand-enables/disables RT in select areas of the game where the hardware can handle it.
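
For what it's worth, the building blocks for that kind of setting do exist on PC: Windows will signal an event whenever it changes the process's VRAM budget, and the engine can react by shrinking or growing its texture pool. Here's a rough sketch of that loop (my own illustration; the thresholds and the AdjustTexturePool() hook are hypothetical, and only the DXGI calls are real API):

Code:
// Sketch of a "dynamic texture detail" loop: watch the OS-reported VRAM budget
// via DXGI and step the texture pool down when usage overflows it.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

// Hypothetical engine hook: raise or lower the streaming pool by one notch.
void AdjustTexturePool(int direction) {
    printf("texture pool %s one notch\n", direction < 0 ? "down" : "up");
}

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;
    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);               // primary adapter, for brevity
    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    // The OS signals this event whenever the local VRAM budget changes.
    HANDLE budgetEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    DWORD cookie = 0;
    adapter3->RegisterVideoMemoryBudgetChangeNotificationEvent(budgetEvent, &cookie);

    for (int frame = 0; frame < 10; ++frame) {         // stand-in for the frame loop
        WaitForSingleObject(budgetEvent, 1000);        // budget changed (or 1 s timeout)
        DXGI_QUERY_VIDEO_MEMORY_INFO local{};
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
        if (local.CurrentUsage > local.Budget)
            AdjustTexturePool(-1);                     // over budget: drop detail
        else if (local.CurrentUsage < local.Budget * 8 / 10)
            AdjustTexturePool(+1);                     // comfortable headroom: raise it
    }

    adapter3->UnregisterVideoMemoryBudgetChangeNotificationEvent(cookie);
    CloseHandle(budgetEvent);
    return 0;
}

Whether consoles do something equivalent under the hood I genuinely don't know, but on PC the information is there for an engine to act on.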
 