Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

It's neck-and-neck with a 3070 and outdoes it in scenarios with high VRAM requirements.

Indeed. In some - admittedly outlier - cases it straight slaughters all the 8GB cards.


[Benchmark chart: Doom Eternal with RT, 3840x2160]


[Benchmark chart: Far Cry 6 with RT, 3840x2160]


While this isn't currently the normal state of affairs, it does suggest that if VRAM creep continues (as it tends to as a generation progresses), the 2080 Ti may find itself better able to handle memory-demanding games than newer 8 GB cards, even at resolutions well below 4K.

I expect 1440p games, and perhaps even 1080p games with RT, to be more frequently hitting the limits of 8GB cards as time goes on. Especially once developers and driver providers stop giving a shit about gracefully managing 8GB cards.

If you were able to afford a 2080 Ti at launch and you're the kind of person who hodls a card, you were onto a winner.
 
Yeah, the 2080 Ti is still an above-average GPU even now.
I don't disagree. But even the fastest GPU is useless if it runs out of memory. I can't stress enough how much I didn't want an 8GB card knowing that a new console gen was coming out.

I expect 1440p games, and perhaps even 1080p games with RT, to be more frequently hitting the limits of 8GB cards as time goes on. Especially once developers and driver providers stop giving a shit about gracefully managing 8GB cards.
Xbox Series S might be what saves those 8GB cards. It's not entirely without irony.
 
Here is an example from Miles Morales comparing 1080p with very high textures against 1440p with high:
1080p:
1440p:

Image quality at 1440p is much better while at the same time the game needs 900 MB less VRAM. Very high at 1080p doesn't really provide any meaningful image quality improvement for ~1.5 GB more memory.
 
I don't disagree. But even the fastest GPU is useless if it runs out of memory. I can't stress enough how much I didn't want an 8GB card knowing that a new console gen was coming out.


Xbox Series S might be what saves those 8GB cards. It's not entirely without irony.
It may not.
There are massive fundamental differences between the consoles and PCs that are just going to take a lot of time to sort out on the PC side, and really we're talking about the memory system; we don't even need to talk about fancy optimization and cache scrubbers and shit. The hUMA on console, combined with NVMe and hardware decompressors, puts PC in a weird spot for a long time, not to mention compiled shaders; software solutions need to be figured out on PC for these things to work. Previously this was a non-issue, because XBO and PS4 came in relatively weak compared to what was on PC at the time. But today it's not like that, because memory, storage, and bandwidth are the lifeforce of performance. And the higher those numbers go, the more saturation you get on your PCIe bus on PC, a problem that doesn't exist on console. Consoles are getting pretty high with their bandwidth and storage numbers, so while PC components can crush the consoles, the carriers of information are completely stressed out, and I've been thinking about that crazy amount of CPU usage with TLOU and what it could imply for PCIe as well.

With PC, you can attempt to overpower the differences with hardware, and likely this will address most of the issues (as we see with VRAM), but I suspect there are always going to be challenges around weird hitching on PC, and I don't believe it's necessarily the result of DX12. It's going to be a real growing pain, and I think as we move away from last gen, it's quite possible that this problem will become more prominent, not less.

There are lots of individual components on PC that are super powerful and will always be super powerful, but in the end they work in silos rather than together. And because of that, bottlenecks can exist between those individual components that consoles bypass, because console hardware works as a whole.

This will be a painful generation for PC, I suspect; ideally, by the end of this generation they'll have PC sorted out for the next one. But I think it's going to take a long time to architect the new software stacks and technologies that need to happen on the PC side for it to operate without all this hitching.

I have said many times that this is just growing pains, and I think it is. But I think what's going to become clear - and fuck, I hate saying it - is that there's going to be a lot of outrage like we see here, because PC's individual components are much more powerful than their console counterparts, yet the challenges around hitching etc. will still be there. And naturally, today we blame devs, APIs, etc., because the hardware is so powerful. But what if... just what if the bottlenecks are actually in between those components? Because no one is talking about that.

tl;dr: when nothing needs to be loaded and everything can just stay in memory, both system RAM and VRAM, PC games fly to 300 fps or more at ultra settings. As soon as we get into games that are significantly larger than system RAM and VRAM, and things need to be streamed into the game constantly, we get these issues: compilation issues, weird stutters, etc. But we don't see these issues on consoles (except on Xbox, likely because so much code is shared with PC). That, to me, is the memory-transfer problem: when memory doesn't arrive after you request it, you're stalled out.
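To make that stall concrete, here's a minimal, purely illustrative sketch - made-up timings and asset IDs, no real engine code - of how a single streaming miss turns into a frame-time spike when the load has to finish before the frame can:

Code:
#include <chrono>
#include <cstdio>
#include <thread>
#include <unordered_set>

using namespace std::chrono;

// Pretend asset IDs 1-3 are already resident in VRAM.
std::unordered_set<int> residentAssets = {1, 2, 3};

void blockingLoad(int assetId) {
    // Stand-in for: read from disk -> CPU decompress -> copy over PCIe.
    std::this_thread::sleep_for(milliseconds(40));  // assumed worst-case cost
    residentAssets.insert(assetId);
}

void renderFrame(int neededAsset) {
    if (!residentAssets.count(neededAsset))
        blockingLoad(neededAsset);                  // the stall happens here
    std::this_thread::sleep_for(milliseconds(8));   // nominal per-frame work
}

int main() {
    for (int frame = 0; frame < 5; ++frame) {
        const int needed = (frame == 3) ? 99 : 1;   // frame 3 requests new data
        const auto t0 = steady_clock::now();
        renderFrame(needed);
        const auto ms = duration_cast<milliseconds>(steady_clock::now() - t0).count();
        std::printf("frame %d: %lld ms\n", frame, static_cast<long long>(ms));
    }
}

Real engines obviously hide this behind async I/O and fallback mips, but that's exactly the machinery that has to be rebuilt around PC's split memory pools.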

There's such a big difference between games like Forspoken, Spider-Man, and even AC Valhalla on the one hand, and Counter-Strike, Fortnite, Doom, COD, and even Halo Infinite MP on the other (though campaign is a different animal). There's a difference there. We need to keep our eyes peeled on how developers are addressing open-world games with huge asset libraries and more dynamism in shaders to accommodate more flexibility in gameplay; that could very well become a factor here for hitching. And it's going to get increasingly harder to deal with so many different PC vendors and components that were not designed to work together.
 
Last edited:
I've spent some time with the game now and I'm not sure what the grand mystery is. The game is VRAM intensive and that's basically all there is to it.

I'm playing on a 9900k + 2080 Ti at 1440p and it performs perfectly well even on this old bucket.

A GPU with an MSRP of $1000 introduced in late 2018 is not exactly an 'old bucket' - it was considered a 'boutique' GPU at the time, and its relative performance, as reflected in more modern GPUs, is still considered high-end (3070 level).

Here's a 2080Ti struggling to match the PS5, even in a relatively lax scene to render. If anyone came in here after the PS5's launch and claimed it was not only in the ballpark of a 2080ti, let alone superior, they would have been called nuts.

[Attached benchmark screenshot]

If it was largely just an issue of VRAM, my 3060 wouldn't struggle to maintain 60fps at 1080p, even with a mix of high/medium (and I'm excluding sections which were CPU limited). It's clearly not just VRAM, it has both outsized rendering and CPU requirements, well in excess of the ordinary level of bloat you get from most ports.

Yes, VRAM limitation is a part of it. But it's far more than that.
 
Last edited:
A GPU with an MSRP of $1000 introduced in late 2018 is not exactly an 'old bucket' - it was considered a 'boutique' GPU at the time, and its relative performance, as reflected in more modern GPUs, is still considered high-end (3070 level).

Here's a 2080Ti struggling to match the PS5, even in a relatively lax scene to render. If anyone came in here after the PS5's launch and claimed it was not only in the ballpark of a 2080ti, let alone superior, they would have been called nuts.

View attachment 8619

If it was largely just an issue of VRAM, my 3060 wouldn't struggle to maintain 60fps at 1080p, even with a mix of high/medium (and I'm excluding sections which were CPU limited). It's clearly not just VRAM, it has both outsized rendering and CPU requirements, well in excess of the ordinary level of bloat you get from most ports.

Yes, VRAM limitation is a part of it. But it's far more than that.
DirectStorage and GPU decompression will help with some of this, and we can measure things then.
But as it stands for PC and TLOU:
Assets get sent to system memory from the drive
Assets are decompressed by the CPU
Assets are then copied to the GPU

You've hit PCIe twice for a single asset and still need a CPU decompression pass. As scene complexity balloons and more data needs to move in and out of VRAM, we're just going to see more CPU usage and more PCIe usage.
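For a rough sense of scale, here's a back-of-envelope sketch - the asset size and compression ratio are made-up assumptions, not measurements from TLOU - of the bus traffic per streamed asset on that path versus a GPU-decompression path:

Code:
#include <cstdio>

int main() {
    const double uncompressedMB   = 64.0;  // hypothetical streamed asset
    const double compressionRatio = 2.0;   // assumed ~2:1
    const double compressedMB     = uncompressedMB / compressionRatio;

    // Today: NVMe -> system RAM (compressed), CPU decompress,
    // then system RAM -> VRAM (uncompressed). Two bus crossings per asset.
    const double cpuPathBusMB = compressedMB + uncompressedMB;

    // With GPU decompression: compressed bytes go to VRAM once and are
    // decoded there, so only the compressed size crosses the GPU's PCIe link.
    const double gpuPathBusMB = compressedMB;

    std::printf("CPU-decompress path: %.0f MB of bus traffic + CPU decode time\n", cpuPathBusMB);
    std::printf("GPU-decompress path: %.0f MB of bus traffic, decode on GPU\n", gpuPathBusMB);
}

On a console UMA with a hardware decompressor there's no second hop at all, which is the gap being described here.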
 
Yes this is indeed one of the more obvious ways that this game could have been better tailored to the PC (if time and budget had allowed) rather than shoehorning in the PS5 optimised solution.
Based on what I've seen, the texture quality in TLOU is worse on 8 GB cards now than it was in games from a year or two ago on 4 GB cards. So yeah, I think it's safe to say it could have been done better.
 
DirectStorage and GPU decompression will help with some of this, and we can measure things then.
But as it stands for PC and TLOU:
Assets get sent to system memory from the drive
Assets are decompressed by the CPU
Assets are then copied to the GPU

You've hit PCIe twice for a single asset and still need a CPU decompression pass. As scene complexity balloons and more data needs to move in and out of VRAM, we're just going to see more CPU usage and more PCIe usage.

I have no idea whether the high CPU usage is due to the constant transfers across the PCIe bus or more to do with the CPU cost of texture decompression, but I do agree with your general thesis that, outside of shader compilation, the biggest barrier to PC ports shipping in a quality state right now is the management required for these separate RAM pools isolated by a (relatively) slow bus.

For this reason I've always pushed back when I see comments along the lines of "consoles are just PCs now" merely because they use x86 CPUs and have GPUs designed by AMD. No! The variation in hardware is one thing, but the difference between a UMA and what you get on the PC is very significant in how you approach things; lord knows I've heard developers state as much time and time again.

On that front:

DirectX 12 Update Allows CPU and GPU to Access VRAM Simultaneously

Toms Hardware said:
Microsoft has announced a new DirectX12 GPU optimization feature in conjunction with Resizable-BAR, called GPU Upload Heaps that allows the CPU to have direct, simultaneous access to GPU memory.

The piece goes on to make some assumptions that I don't think are warranted given what little we know about it atm, but as @Dictator mentioned on a recent DF Direct, if we're not going to get a full UMA architecture on the PC anytime soon, what we will see is at least more work being done to ease that chasm from the developer's perspective when accessing these separate islands.
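For what it's worth, based on Microsoft's announcement and the Agility SDK preview, the flow would look roughly like the sketch below. The enum and field names (D3D12_HEAP_TYPE_GPU_UPLOAD, the OPTIONS16 feature check) come from that preview and could still change, so treat this as a sketch of the idea rather than production code:

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Try to place a buffer directly in CPU-visible VRAM (requires Resizable BAR
// and driver support for the preview GPU Upload Heaps feature).
bool TryCreateGpuUploadBuffer(ID3D12Device* device, UINT64 sizeBytes,
                              ComPtr<ID3D12Resource>& outBuffer)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS16 opts16 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS16,
                                           &opts16, sizeof(opts16))) ||
        !opts16.GPUUploadHeapSupported)
        return false;  // fall back to the classic upload-heap + copy path

    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_GPU_UPLOAD;  // VRAM the CPU can map

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    // CPU writes via Map() land straight in VRAM: no staging buffer in system
    // RAM and no extra copy across the PCIe bus afterwards.
    return SUCCEEDED(device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&outBuffer)));
}

It's not a UMA, but it does remove one of the hops discussed above for data the CPU produces or patches frequently.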
 
Last edited:
Yeah, this game is stupidly demanding compared to what the PS5 can achieve, and I really wonder why. I fully expect the PS5 to outperform an equivalent PC. Hell, in Uncharted 4 it performs like a 3070/2080 Ti, but this game has it performing like a 6800 XT, which is a bit crazy given that the 6800 XT is typically 80-90% faster.
 
A GPU with an MSRP of $1000 introduced in late 2018 is not exactly an 'old bucket' - it was considered a 'boutique' GPU at the time, and its relative performance, as reflected in more modern GPUs, is still considered high-end (3070 level).

Here's a 2080Ti struggling to match the PS5, even in a relatively lax scene to render. If anyone came in here after the PS5's launch and claimed it was not only in the ballpark of a 2080ti, let alone superior, they would have been called nuts.

View attachment 8619

If it was largely just an issue of VRAM, my 3060 wouldn't struggle to maintain 60fps at 1080p, even with a mix of high/medium (and I'm excluding sections which were CPU limited). It's clearly not just VRAM, it has both outsized rendering and CPU requirements, well in excess of the ordinary level of bloat you get from most ports.

Yes, VRAM limitation is a part of it. But it's far more than that.
I can't really comment on that since I don't have one. But you need to consider that this engine was very explicitly created for AMD GPUs running on a wholly different graphics API. Comparing different architectures like that is never a good idea and saying that PS5 is equal to whichever nvidia GPU is never going to be accurate. All I can really say about the 3060 is that its specs fall well short of the PS5.

That said, I'll still call my system an old bucket. Console generations typically last 7 years and my system is almost 5. I also found this YouTube video of the game running on a 1080 Ti and I think the numbers say a lot. VRAM is by far the biggest confound in this game. Ultra settings might be more compute heavy but the high settings definitely have a memory toll.

 
Last edited:
A GPU with an MSRP of $1000 introduced in late 2018 is not exactly an 'old bucket' - it was considered a 'boutique' GPU at the time, and its relative performance, as reflected in more modern GPUs, is still considered high-end (3070 level).

Here's a 2080Ti struggling to match the PS5, even in a relatively lax scene to render. If anyone came in here after the PS5's launch and claimed it was not only in the ballpark of a 2080ti, let alone superior, they would have been called nuts.

View attachment 8619

If it was largely just an issue of VRAM, my 3060 wouldn't struggle to maintain 60fps at 1080p, even with a mix of high/medium (and I'm excluding sections which were CPU limited). It's clearly not just VRAM, it has both outsized rendering and CPU requirements, well in excess of the ordinary level of bloat you get from most ports.

Yes, VRAM limitation is a part of it. But it's far more than that.
The game has joke optimization... a 2080ti can't even do 120hz at 1080p, lol. Naughty Dog games are bad on PC.
 
I can't really comment on that since I don't have one. But you need to consider that this engine was very explicitly created for AMD GPUs running on a wholly different graphics API. Comparing different architectures like that is never a good idea and saying that PS5 is equal to whichever nvidia GPU is never going to be accurate. All I can really say about the 3060 is that its specs fall well short of the PS5.

That said, I'll still call my system an old bucket. Console generations typically last 7 years and my system is almost 5. I also found this YouTube video of the game running on a 1080 Ti and I think the numbers say a lot. VRAM is by far the biggest confound in this game. Ultra settings might be more compute heavy but the high settings definitely have a memory toll.

The PS5 is performing well over 50% faster than a 1080 Ti, which is way out of spec even accounting for Pascal not being a great architecture.
 
But you need to consider that this engine was very explicitly created for AMD GPUs running on a wholly different graphics API.

Well aware, we've been talking about that for the last several pages. I'm just responding to your claim that the VRAM limitation is "all there is to it". I'm simply saying yes, of course that's a major factor, but this game also has rendering/CPU demands unlike any other port from PS5 to PC.

Comparing different architectures like that is never a good idea and saying that PS5 is equal to whichever nvidia GPU is never going to be accurate.

We can absolutely make some guesstimates though, based on the wide range of games that have been released on both platforms; the architectures don't have to be identical. It's been over 2 years after all, so we have quite a selection. The fact of the matter is that the only other game that comes close to requiring a 2080ti just to be slightly better than the PS5 - let alone to not even match it in rendering load (!) - is perhaps Uncharted, which coincidentally runs on the same engine and doesn't have anywhere near the same VRAM requirements.

If we want to restrict the comparison to similar architectures, fine - here's the RDNA2 6800xt at 1440p Ultra. Ultra has a ~20% hit over High, so at High these framerates could be in the ~75fps range. I can also tell you that this opening area is certainly not the most stressful in the game - and also note the PS5 does not top out at 60 either; it can go into the high 70s in indoor areas too. So VRAM isn't a factor here, which means that a 6800xt - a card based on the same core RDNA2 architecture as the PS5, but with twice the compute units, and reflected as such in 99% of other games - is only running slightly faster than it. That is indeed a 'mystery'.

[Attached 6800xt 1440p Ultra benchmark screenshot]

It's an extremely outsized GPU load compared to every other port; that's just the reality of it. Like others have said, it may be 'justified' by how it had to be ported economically without a major engine rewrite - nobody is saying there is something nefarious going on, we just don't know how or why the engine works this way on PC GPUs. But it is well out of the norm in what hardware grunt it requires, and it's not just VRAM as the bottleneck.

That said, I'll still call my system an old bucket. Console generations typically last 7 years and my system is almost 5.

The age of your GPU is only relevant insofar as what product tier it compares to now; the entire point of shelling out the big bucks for top-end products is their supposed longevity. The 2080 Ti was the top-end card when it was released, back in the day when $1k for a GPU was considered excessive. With the slowing of relative performance gains over the years, your 2080 Ti's performance class today translates into a ~$600 product (3070). It's one thing to have a 5-year-old midrange card that is now equivalent to something like a 2060 Super - a game running well on such a 'bucket' would indeed be impressive - but your game is running on the equivalent of a midrange (~$600) card and performing worse than a $400-$500 console. That is indeed quite out of the norm.
 
This is no different to what we had in 2013 when the PS4 and Xbone released, as even high-end GPUs didn't have as much VRAM as the consoles did.
I agree, but I don't remember playing Ryse: Son of Rome or Titanfall on my GTX 660 Ti (3 GB) and experiencing any VRAM problems; texture settings were pretty scalable visually back then. It never was this extreme.
 
This is no different to what we had in 2013 when the PS4 and Xbone released, as even high-end GPUs didn't have as much VRAM as the consoles did.

People have just forgotten.
I think there is a difference compared to back in 2014-2015: the quality of the texture options available now in 2023 is worse. 2 GB and 3 GB GPUs back then still got decent texture quality in that time frame.

The textures in TLOU Pt 1 turn to sub-PS4 quality if you have 8 GB of VRAM.
[Screenshot of in-game texture quality on an 8 GB card]


That is the texture quality they can achieve with 8 GB of VRAM. The GPUs we have in 2023 are more capable than ever of using VRAM smartly to increase quality (DX12, partially resident textures / virtual texturing, sampler feedback), yet in spite of all that better programmability, ND have shipped a game whose textures are worse than on consoles released in 2013.
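As a minimal sketch of what that "smart" VRAM use looks like on the API side - assuming a D3D12 device and a recent Windows SDK - an engine would first check for the two relevant capabilities and then stream only the tiles and mips that are actually sampled:

Code:
#include <d3d12.h>

bool SupportsStreamingFriendlyTexturing(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS  opts  = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};

    // Tiled (partially resident) resources: reserve a huge virtual texture and
    // map only the 64 KB tiles that are actually needed into VRAM.
    const bool tiled = SUCCEEDED(device->CheckFeatureSupport(
                           D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts)))
                       && opts.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_2;

    // Sampler feedback: the GPU records which mips/regions the shaders touched,
    // so the streamer knows exactly what to page in next frame.
    const bool feedback = SUCCEEDED(device->CheckFeatureSupport(
                              D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7)))
                          && opts7.SamplerFeedbackTier != D3D12_SAMPLER_FEEDBACK_TIER_NOT_SUPPORTED;

    return tiled && feedback;
}

None of this forces a developer to use it, of course, which is rather the point being made here.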
 
The textures in TLOU Pt 1 turn to sub-PS4 quality if you have 8 GB of VRAM.

And there Alex is the issue with TLOU on PC.

It's not its VRAM requirements; it's that the low and medium options have really bad textures, thus forcing people to go with a setting their GPU can't handle to get a decent-looking game.

I suppose the question is why are the textures that bad at lower settings?

As the game was made for PS5, did the textures required to scale decently below what the PS5 offers simply not exist, and is this soupy mess what they thought was OK?

If the game had released with good looking textures on low and medium would the outrage at how it performs still been as bad?

Is this also a sneak peek at your comparison video 👀
 
Last edited: