The Last of Us, Part 1 Remaster [PS5, PC]

I've just got to the University.

The more I play it, the more I understand why it's using so much memory; there are just so many unique assets. I went into a large house and really struggled to find instances where an asset was used more than once.
Yeah, the asset variety is just nuts with ND's games. It's as if every part and object was handled separately and with special care.
The question is, how did they manage to fit everything into a PS5 (or a PS4 for that matter, for TLOU2), yet the game demands more memory on PC?
 
  • 4070ti: 504.2GB/s
  • 3090: 936.2GB/s
  • 3090ti: 1TB/s
Even the 3070ti has more bandwidth at 608.3GB/s

Now granted, the 4070ti has some slight architectural improvements over the 3000 series regarding bandwidth efficiency, and some additional cache, but those improvements will only get you so far.

The 3090ti's actual bandwidth is 1008GB/s, literally double the 4070ti's. Cache isn't going to come close to making up for such a massive drop in bandwidth, and the 3090 is in the same boat with a huge bandwidth advantage.

Yes, but Ada gets by on far less bandwidth, due mainly to its cache structure. Look at the 4090 outperforming the 3090Ti by 60% or more on the same bandwidth, while the 4080 obliterates the 3090 at all resolutions with much less bandwidth.

I might have missed something, but everything I've seen to date outside of TLOU, with its well-documented high VRAM requirements, has seen the 4070Ti maintaining its relative performance position vis-à-vis the 3090 and 3090Ti at all resolutions, with no significant drop-off at 4K. Certainly not to the extreme extent we're seeing here in TLOU anyway. I'd also wager that, based on this, the 4070Ti would see a significant performance boost at 4K simply by dropping texture quality to High. At least in the TPU benchmark run; other areas might of course have different VRAM profiles.
 
It's because of the VRAM differences. Look at the 12GB 3060 outperforming the 8GB 3070 there. Even the 4070Ti's 12GB is limiting it at 4K.
... I feel like I'm being attacked here.

Stop, guys. Yes, I know I made a mistake paying the same price for an 8GB 3070 as a 7900XTX goes for today.

edit: /s I don't actually feel attacked. But yes, I shouldn't have bought an 8GB 3070
 
... I feel like I'm being attacked here.

Stop, guys. Yes, I know I made a mistake paying the same price for an 8GB 3070 as a 7900XTX goes for today.

In fairness, this is pretty much the only game where the VRAM is going to cause such severe issues. And that may yet be patched up (as it has been in other games that suffered similar issues over the last few months). Also, from what I've seen, there's very little difference between High and Ultra textures so dropping that setting at least should give a decent and relatively cheap performance boost.
 
Yes, but Ada gets by on far less bandwidth, due mainly to its cache structure. Look at the 4090 outperforming the 3090Ti by 60% or more on the same bandwidth, while the 4080 obliterates the 3090 at all resolutions with much less bandwidth.

I might have missed something, but everything I've seen to date outside of TLOU, with its well-documented high VRAM requirements, has seen the 4070Ti maintaining its relative performance position vis-à-vis the 3090 and 3090Ti at all resolutions, with no significant drop-off at 4K. Certainly not to the extreme extent we're seeing here in TLOU anyway. I'd also wager that, based on this, the 4070Ti would see a significant performance boost at 4K simply by dropping texture quality to High. At least in the TPU benchmark run; other areas might of course have different VRAM profiles.

I'm going to have to disagree with you there. Granted, the cache will help, but I don't think it has enough bandwidth for 4K.

Taken from the TechPowerUp 4070 Ti review:

You've probably noticed it while looking at the charts: RTX 4070 Ti loses quite some performance as the resolution is increased—more so than other cards in the same performance tier. For example, at 1080p, the 4070 Ti is 4% faster than the RTX 3090 Ti, at 1440p both cards are evenly matched, and at 4K the 3090 Ti is 10% faster. That's a surprisingly big range; things are no different when compared to the RX 7900 XT: -4% at 1080p, -8% at 1440p, and -10% at 4K—same 13-14% delta. It's definitely not an architectural problem, because we're seeing the same trend against the RTX 4080, which is based on Ada, too: -14% at 1080p, -19% at 1440p and -26% at 4K. To me it seems that the underlying reason for this behavior is that the AD104 GPU has a ton of shading power, but becomes limited by cache size and memory interface at higher, more memory-intensive resolutions.

And

The end result is that NVIDIA is clearly the better choice if you're betting on ray tracing, but the differences aren't exactly huge for the RTX 4070 Ti. It seems that due to the cache/memory configuration, the card sees a bigger performance hit from enabling ray tracing than other GeForce 40 cards, especially at 4K. That's not to say that RTX 4070 Ti is bad at ray tracing—it still is one of the fastest cards in that scenario, but the performance scaling is something you should be aware of.
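
As a rough back-of-the-envelope illustration of that cache argument (the hit rates below are made-up numbers for illustration, not measured figures for AD104):

```cpp
#include <cstdio>

int main() {
    // Published raw DRAM bandwidth figures (GB/s).
    const double bw_4070ti = 504.2;   // 192-bit GDDR6X
    const double bw_3090ti = 1008.0;  // 384-bit GDDR6X

    // Hypothetical fraction of memory traffic absorbed by the big L2.
    // Purely illustrative assumptions, not measured hit rates.
    const double l2_hit_1440p = 0.50; // smaller working set fits the cache better
    const double l2_hit_4k    = 0.30; // bigger render targets/mips thrash the cache

    // "Effective" bandwidth = raw bandwidth / fraction of traffic that still hits DRAM.
    auto effective = [](double raw, double hit) { return raw / (1.0 - hit); };

    printf("4070 Ti effective @1440p: %.0f GB/s\n", effective(bw_4070ti, l2_hit_1440p)); // ~1008
    printf("4070 Ti effective @4K:    %.0f GB/s\n", effective(bw_4070ti, l2_hit_4k));    // ~720
    printf("3090 Ti raw:              %.0f GB/s\n", bw_3090ti);                          // 1008
    return 0;
}
```

With those assumed hit rates the 4070 Ti effectively matches the 3090 Ti's bandwidth at 1440p but falls well short at 4K, which lines up with the scaling TPU describes above.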
 
... I feel like I'm being attacked here.

Stop, guys. Yes, I know I made a mistake paying the same price for an 8GB 3070 as a 7900XTX goes for today.

8GB VRAM was the main reason I ditched my 3060ti a few months ago as I was getting right up to the VRAM limit at 1440p in quite a few games.
 
In fairness, this is pretty much the only game where the VRAM is going to cause such severe issues. And that may yet be patched up (as it has been in other games that suffered similar issues over the last few months). Also, from what I've seen, there's very little difference between High and Ultra textures so dropping that setting at least should give a decent and relatively cheap performance boost.
It's true. It's only ports so far; multiplatform titles have yet to exhibit this issue. This is likely to change as we drop off last gen, however.
 
8GB VRAM was the main reason I ditched my 3060ti a few months ago as I was getting right up to the VRAM limit at 1440p in quite a few games.
I was so desperate for a GPU during COVID after hanging onto my 1070 as long as I had. I regret not waiting a little longer.
 
I was so desperate for a GPU during COVID after hanging onto my 1070 as long as I had. I regret not waiting a little longer.

If you don't care about ray tracing and don't have an issue with using an AMD GPU, then you should be able to sell it and move to a 6800XT for next to no extra investment on your end.
 
If you don't care about ray tracing and don't have an issue with using an AMD GPU, then you should be able to sell it and move to a 6800XT for next to no extra investment on your end.
I'll consider it. It really comes down to how limited I feel here. Right now, I'm okay, but as more games from PS5 come to PC and this becomes a serious bottleneck, I'll make the consideration. Or just wait for the latest and greatest.
 
Or just don't bother playing those titles... 🤷‍♂️
yea, I had both consoles. Then I decided everything was coming to PC anyway, so I'd just return to my humble PC origins. I do have limited time to play, so whatever makes sense.
There's no rush to get something now. I can get away with 1080p DLSS for a while I suspect.

I didn't get to finish TLOU2. And I want to play HZD2 and GOW2. I enjoyed the first ones, so I'd like to continue on those 3. Everything else, I'm not in a rush.
I've spent the last few months playing significantly more D&D and boardgames than I have video games. So you are right there is no real rush, I spend more time on this forum than playing games =P
 
So is the "real time reflections" setting just really good-looking cubemaps here and there? I got to the area where the comparison picture for the setting was taken, and the difference between Off and Low, and then Low to High, is pretty big. Can't think of anywhere else in the game where I've seen cubemaps that looked this good. They mostly looked OK, to the point I would've preferred that they were fully SSR, even if low-res.

[Screenshots attached]
 
... I feel like I'm being attacked here.

Stop, guys. Yes, I know I made a mistake paying the same price for an 8GB 3070 as a 7900XTX goes for today.

edit: /s I don't actually feel attacked. But yes, I shouldn't have bought an 8GB 3070

Yup, I went with the Radeon 6800 (not the XT) as it was both cheaper than the 3070 and had double the VRAM (16 GB).

Once I saw games were starting to be really hampered on 8 GB cards, the 3070 was no longer anything I'd be willing to pay even 100 USD for.

Regards,
SB
 
It may also help to make the game less crashy to:
  • lower the GPU core and memory clocks (or just lower the max power budget)
For some unknown reason, on some people's hardware the game easily crashes the GPU.

You may also want to lower the CPU clock (or reduce boosting, or set a lower temperature limit), but the main concern is the GPU, despite TLOU generally almost never maxing out GPU usage.
 
I'm going to have to disagree with you there. Granted, the cache will help, but I don't think it has enough bandwidth for 4K.

Taken from the TechPowerUp 4070 Ti review:



And

Yes, sorry, you're right. I had forgotten it scales like this, as ever since the initial reviews I've only ever focussed on 4K performance and just seen the 4070Ti as generally a bit slower than the 3090Ti. This is actually good news for me, as my native res is a bit less than 4K :)

That said though, I still believe the scaling at 4K in TLOU is influenced by the VRAM limitation to a significant degree. In that benchmark it doesn't just drop below the 3090Ti by a few percent; its performance falls off a cliff, to the tune of 11% slower than the older 3090 and a full 21% slower than the 3090Ti - a much bigger gap than the TPU average.

Even more telling is its performance relative to the 16GB 6900XT, which is basically a wash, whereas it should be around 11% faster than that GPU according to the TPU average. When we couple that with the obvious VRAM-based performance limitation at play on the 3070 (slower than the 3060) and the chart at the top showing 14GB VRAM usage at 4K, I think it's an inescapable conclusion that VRAM limits are hitting the 4070Ti here at 4K. Not that I think it's justified for this game, and it's hopefully something that will be resolved through patches. In fact, the game has already got a couple of patches which seemingly address VRAM limitations to a degree, which I assume were not accounted for in the TPU review.
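
For anyone curious how a PC port even knows it's running out of budget: here's a minimal sketch (my own illustration, not how the actual engine does it) of querying the OS-provided VRAM budget through DXGI, which is the mechanism D3D12 titles generally use to drive texture streaming decisions:

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

// Ask DXGI how much local (on-GPU) memory the OS currently budgets for this
// process and how much of it is already in use.
int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    printf("VRAM budget: %.1f GB, in use: %.1f GB\n",
           info.Budget / 1e9, info.CurrentUsage / 1e9);

    // A streaming system would compare CurrentUsage against Budget and drop
    // texture mip levels (e.g. Ultra -> High) before overshooting, rather
    // than letting allocations spill into system memory.
    return 0;
}
```

If a game's own budgeting overshoots what DXGI reports, the driver starts paging resources over PCIe, which would explain the kind of cliff-edge behaviour in those 4K numbers.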
 
Yes, sorry, you're right. I had forgotten it scales like this, as ever since the initial reviews I've only ever focussed on 4K performance and just seen the 4070Ti as generally a bit slower than the 3090Ti. This is actually good news for me, as my native res is a bit less than 4K :)

That said though, I still believe the scaling at 4K in TLOU is influenced by the VRAM limitation to a significant degree. In that benchmark it doesn't just drop below the 3090Ti by a few percent; its performance falls off a cliff, to the tune of 11% slower than the older 3090 and a full 21% slower than the 3090Ti - a much bigger gap than the TPU average.

Even more telling is its performance relative to the 16GB 6900XT, which is basically a wash, whereas it should be around 11% faster than that GPU according to the TPU average. When we couple that with the obvious VRAM-based performance limitation at play on the 3070 (slower than the 3060) and the chart at the top showing 14GB VRAM usage at 4K, I think it's an inescapable conclusion that VRAM limits are hitting the 4070Ti here at 4K. Not that I think it's justified for this game, and it's hopefully something that will be resolved through patches. In fact, the game has already got a couple of patches which seemingly address VRAM limitations to a degree, which I assume were not accounted for in the TPU review.

This discussion we're having is making me think twice about going to 4k with my 4070ti.

The 4K monitor I ordered arrived in the state it did (see attached), so maybe that's fate telling me to stick to 1440p.
 

Attachments: IMG20230331192444.jpg (2.9 MB)
It looks like they issue drawcalls for the UI despite it being invisible

[screenshot]

Not the biggest issue by any means, but more wasted GPU time :p

On some platforms this can waste significant amounts of bandwidth though. I suspect this is not unique to the PC port, but they do the same on console too.
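
Conceptually the fix is just an early-out before recording the UI draws. A minimal sketch (the types and names are made up for illustration, not from the game's code):

```cpp
#include <vector>

struct UIElement {
    float alpha   = 1.0f;  // 0 = fully transparent
    bool  enabled = true;
    // ... vertex/texture data for the widget would live here
};

struct CommandList {
    void drawQuad(const UIElement&) { /* records one draw call */ }
};

// Record the HUD. Skipping invisible widgets costs a branch per element on
// the CPU but saves the GPU from rasterising and blending quads that
// contribute nothing to the final image.
void recordUI(CommandList& cl, const std::vector<UIElement>& hud, bool hudVisible) {
    if (!hudVisible)        // e.g. cutscene or photo mode with the HUD hidden
        return;             // don't record any UI draw calls at all
    for (const auto& e : hud) {
        if (!e.enabled || e.alpha <= 0.0f)
            continue;       // fully faded-out widgets are skipped too
        cl.drawQuad(e);
    }
}
```

On a bandwidth-constrained console, not blending a stack of fully transparent quads is the saving being alluded to above.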
 
Here's a concept run at 4K / DLSS Performance with tweaked settings on 16 GB RAM / 8 GB VRAM, in one of the hardest sections of the game to run on the 3070.


Recording also affects this. Recording takes about 400-500 MB of VRAM; without it, the game uses around 7.5-7.6 GB of dedicated VRAM and runs smoother.

1440p is a cakewalk regardless. 4K DLSS Performance actually has a bigger VRAM impact than native 1440p, so please don't come at me saying I'm playing at 1080p. With all my respect.
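
A rough, assumption-laden illustration of why that can happen: with DLSS Performance the internal render is 1080p, but the upscaled output, post-processing chain and UI buffers are full 4K, and texture streaming typically targets the output resolution too. The render-target format (RGBA16F) and which passes run at which resolution are my assumptions here, not measurements from the game:

```cpp
#include <cstdio>

int main() {
    // Bytes for one RGBA16F render target at a given resolution (assumed format, 8 bytes/pixel).
    auto mb = [](int w, int h) { return w * (double)h * 8 / 1e6; };

    printf("1080p target: %.1f MB\n", mb(1920, 1080)); // ~16.6 MB (DLSS Performance internal res)
    printf("1440p target: %.1f MB\n", mb(2560, 1440)); // ~29.5 MB (native 1440p)
    printf("4K target:    %.1f MB\n", mb(3840, 2160)); // ~66.4 MB (DLSS output res)
    return 0;
}
```

Multiply that across the post-process chain and the 4K-targeted texture mips and it's plausible for 4K + DLSS Performance to end up heavier on VRAM than native 1440p.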
 