Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

Nah, they trade blows, but on balance the PS5 GPU could be argued to be ahead on paper. Compared with the PS5, and using official boost clocks for both as the basis, the 2080 has:

77% of PS5's pixel fill rate
98% of PS5's compute and texture throughput
15% more geometry throughput
Equal memory bandwidth

Given the PS5's memory bandwidth is shared with the CPU, and the 2080 can often run above its rated boost clock, I guess we can see how the PS5 rarely seems to match the 2080's real world performance. But that doesn't really leave much room for console optimisations giving a relative performance boost.
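For anyone who wants to sanity check those percentages, here's a rough back-of-envelope sketch. The unit counts and clocks are my own assumptions pulled from public spec sheets (2080 at its 1710MHz rated boost with 2944 shaders, 64 ROPs, 184 TMUs and 6 raster engines; PS5 at 2233MHz with 2304 shaders, 64 ROPs, 144 TMUs and 4 primitive units; 448GB/s on both), so treat it as illustrative rather than definitive:

```python
# Back-of-envelope RTX 2080 vs PS5 throughput ratios (raster only).
# Unit counts and clocks below are assumptions taken from public spec sheets.
def metrics(shaders, rops, tmus, raster_units, clock_ghz, bandwidth_gbs):
    return {
        "pixel fill (GPix/s)": rops * clock_ghz,
        "compute (TFLOPs)":    shaders * 2 * clock_ghz / 1000,  # 2 FLOPs per shader per clock (FMA)
        "texture (GTex/s)":    tmus * clock_ghz,
        "geometry (GPrim/s)":  raster_units * clock_ghz,
        "bandwidth (GB/s)":    bandwidth_gbs,
    }

rtx2080 = metrics(2944, 64, 184, 6, 1.710, 448)  # rated boost clock
ps5     = metrics(2304, 64, 144, 4, 2.233, 448)  # official max boost clock

for key in rtx2080:
    print(f"{key:20s} 2080 {rtx2080[key]:7.1f}  PS5 {ps5[key]:7.1f}  2080/PS5 {rtx2080[key] / ps5[key]:.0%}")
```

Run as-is it lands on roughly the 77% / 98% / 115% / 100% figures above.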
You’re comparing the max clock of PS5 to the minimum clock of a 2080. Real world clocks are going to put a 2080 above 11 TFLOPs and the PS5 likely below 10. The bandwidth contention as you mentioned is also big. Nvidia GPUs also use their bandwidth more efficiently than pre Infinity Cache AMD GPUs.
 
We should also take the super tiny VRAM buffer into account.


Remember how having ultra textures (the PS5 equivalent) in Spider-Man tanked the 3070's performance by almost 50% at 4K with PS5-matched settings.

These 8 GB GPUs lose varying amounts of performance the more they spill data to regular RAM. It is also dynamic, not fixed. It is possible that most 8 GB cards always lose a bit of performance due to spillage, in varying amounts. It is really hard to compare 8 GB GPUs to PS5 at this point.

I think it is high time we stopped using the 2080 as a metric and used the 6700 XT instead to see how performance fares over there. VRAM limitations make the 2080 comparison moot. There will be points where the 3070/2070/2080 and co. will tank to nearly half their performance.
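If anyone wants to check whether their own 8 GB card is actually spilling in a given game, one crude option is to just log dedicated VRAM use while playing. A minimal sketch assuming an NVIDIA card and the nvidia-ml-py (pynvml) package; the 90% threshold is an arbitrary marker, not a hard rule:

```python
# Log VRAM usage once per second; sustained near-full usage is a hint the game
# may be spilling into system RAM. Assumes the nvidia-ml-py (pynvml) package.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        used_gb, total_gb = mem.used / 2**30, mem.total / 2**30
        note = "  <-- near the limit" if mem.used / mem.total > 0.9 else ""
        print(f"VRAM {used_gb:4.1f} / {total_gb:4.1f} GB{note}")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```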
 
It's fine to be excited about your favorite console but come on.

 
You’re comparing the max clock of PS5 to the minimum clock of a 2080. Real world clocks are going to put a 2080 above 11 TFLOPs and the PS5 likely below 10. The bandwidth contention as you mentioned is also big. Nvidia GPUs also use their bandwidth more efficiently than pre Infinity Cache AMD GPUs.
Hard to compare. PS5 ports are heavily optimized to its targeted hardware. Also, the PS5 api is lower level and more performant than DX12 as there is no need to consider compatibility across multiple platforms and configurations.

On paper the 2080 is more powerful than the PS5 GPU but software performance is ultimately a product of more than Tflops and bandwidth.
 
Hard to compare. PS5 ports are heavily optimized to its targeted hardware. Also, the PS5 api is lower level and more performant than DX12 as there is no need to consider compatibility across multiple platforms and configurations.

On paper the 2080 is more powerful than the PS5 GPU but software performance is ultimately a product of more than Tflops and bandwidth.
I’m comparing the 2 on a purely hardware basis.
 
Yea there's definitely some spectrum involved here, and you're right to look at my choice of words under a lens.

I think where I want to go with this is that, if all games are going multiplatform at launch from now on, consoles that used to offer low cost, ease of use, and the ability to 'punch above their weight' are eventually going to fall back to just low cost and ease of use.

Without those hyper-focused optimizations, the punching above their weight is just not going to happen.
I don't think that makes much sense. Even multiplatform games now are still considered console ports to PC. The people responsible for console development in Sony WWS will still be there. It's just that PC-oriented coders will also be there for porting duties.
 
Downloaded TLOU:RM just to see how it could perform on my system (i5-12400, 16GB, 3060) and uh...yikes. I don't know man, but from my experience with game patches, getting this into an acceptable state on hardware like mine, which I don't think is that out of the ordinary for a lot of gamers, is going to require some unparalleled performance improvements - these would need to be some of the most transformative game patches ever released. It's one thing to add shader compiling, fix crashes and maybe reduce a little stuttering through updates, but I really doubt we're going to see anything substantially improve loading/rendering performance. Hope I'm just being pessimistic.

I can definitely see with a beefier CPU and an Ada-level card how spectacular the game could look and run, and hell especially if they add DLSS3, no doubt for a certain class of hardware this would be a hell of an experience - but there's a massive gulf between that and the settings I can run at.

Soooo....
  • Both GPU and CPU bottlenecked on my system at 1440p using DLSS/FSR Performance, preset High - but even at Medium there were a few edge cases where I was still limited by my GPU. While I haven't gotten that far (and don't plan to, as this is just to check out its technical state), the CPU/GPU load can vary quite a bit depending upon the scene, so I really wouldn't take a benchmark that focuses just on the opening area in the quarantine zone as truly indicative of the performance of the game as a whole. I've had frame rates in the 50's with 70% GPU load but 99% CPU (shaders all precompiled), and 70% CPU but 99% GPU (a rough sketch of the kind of sampling I mean follows after this list). Didn't test it extensively, but based on my results with DLSS, odds are you will not be able to maintain 1080p native 60 without reconstruction on a 3060.
  • Several instances of "LOADING" as the action comes to a halt in the middle of a level and you're waiting 10+ seconds.
  • Even when not limited by CPU/GPU, occasional small stutters.
  • Prolonged stutters due to massive CPU spikes when traversing, presumably due to decompressing textures...?
  • Somewhat rare, but noticeable shadow banding at points which I don't think exist in the PS5 version.
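(For the curious, this is the kind of crude sampling I mean; it assumes the psutil and nvidia-ml-py packages and just compares overall GPU utilisation against the busiest CPU core once a second. A pegged GPU with headroom on the cores suggests GPU-limited, and vice versa - a heuristic, not a profiler.)

```python
# Sample GPU utilisation and the busiest CPU core roughly once per second.
# Assumes psutil and nvidia-ml-py (pynvml); a rough heuristic, not a profiler.
import psutil
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        busiest_core = max(psutil.cpu_percent(interval=1.0, percpu=True))  # blocks ~1s
        gpu_util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
        guess = "likely GPU-limited" if gpu_util >= 95 else (
                "likely CPU-limited" if busiest_core >= 95 else "headroom on both")
        print(f"GPU {gpu_util:3d}%  busiest CPU core {busiest_core:5.1f}%  -> {guess}")
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```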
Yeah not good, but not too surprising based on other reports. I remember when this was first announced and I thought "Hmm I'll definitely hold out for the PC version, I may be able to run console settings at 4K with DLSS performance for 60fps" - lol.

That being said, the PS5 version's TAA is quite blurry, so even with DLSS required at 1440p there was a chance it would still hold up in overall visual quality, if not be superior. At first glance, when I started with 1440p DLSS Quality, that looked to be the case: it was definitely sharper, and while there was the odd moiré pattern when the DLSS resolve struggled with the odd floor grating, overall in the daylight areas it was very solid. As I eventually had to drop DLSS from Balanced to Performance to try to maintain 60fps once you're getting outside the city in the dark with rain, that naturally reduced the quality, but it was still relatively OK.

However, what I really wanted to test was my old nemesis with reconstruction, and that is how it deals with lower-res post-process effects. Some games handle it decently, but quite a few don't. I was especially curious because when I use DLSS it's almost exclusively at 4K and I can still notice these artifacts when they occur, so I was really interested to see how it dealt with these cases starting from 1440p, and with a game that relies heavily on post-process effects to boot.

And...welp (and note that's using DLSS Balanced, which I couldn't maintain). Unfortunately using the flashlight is not exactly a rare practice in this game, and there were more egregious incidents of it in combination with certain materials behaving oddly outside of what I captured in that video, such as bathroom tile (where FSR in particular struggled to an almost comical level). Note that is on High, and you have control over the resolution of many of these effects, but even at Ultra these artifacts were still pretty prominent. The thing is, to keep my FPS even close to 60 at 1440p/Performance I could drop to Medium, but then that would just make these reconstruction artifacts even worse.

So yeah, measured 'on the whole' vs every scene up until that point, I guess you could say DLSS at resolutions like 1440p in this game is 'good' to 'ok', but when you run into these incidents, which can occur quite frequently in some environments, it really takes you out of it. There's the possibility these can be improved with patches too, albeit Uncharted had similar artifacts that, to the best of my knowledge, were never patched either.

For my money you would need DLSS Quality and the High preset at minimum just to compare to the PS5's performance mode (which, while 'soft', is at least extremely stable) - at least when you take into account actually running the game beyond taking a few comparative still screenshots in the daytime. That level of performance means we're in 3070, maybe even 3080 territory - not to mention likely requiring a 12700K to deal with the CPU bottleneck. That's nuts.
 
I don't think that makes much sense. Even multiplatform games now are still considered console ports to PC. The people responsible for console development in Sony WWS will still be there. It's just that PC-oriented coders will also be there for porting duties.
If games are releasing day 1 on PC and console, it’s a single code base. There’s no porting.

Porting by definition is taking something that is entirely designed for one platform and moving it to another. A multiplatform title released on the same day is not a console port to PC. It's a single code base that works the same across all the platforms supported. It's the only way it's fair; otherwise you run into some weird issues with the latter approach. It's the reason why multiplayer games are always multiplatform and not ports. Ports would make cross-platform play very challenging to balance between patches.
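To make the "single code base" point concrete, here's a hypothetical sketch of the usual shape (the class and platform names are mine, not how any particular studio actually structures things): the shared game code only ever talks to a common interface, and the platform backend is the only part that differs.

```python
# Hypothetical illustration of one shared code base with per-platform backends.
# Class and platform names are illustrative, not any studio's real structure.
from abc import ABC, abstractmethod
import sys

class Renderer(ABC):
    @abstractmethod
    def draw_frame(self, scene: str) -> None: ...

class D3D12Renderer(Renderer):      # PC / Xbox style backend
    def draw_frame(self, scene: str) -> None:
        print(f"[D3D12] drawing {scene}")

class PS5Renderer(Renderer):        # console style backend
    def draw_frame(self, scene: str) -> None:
        print(f"[PS5] drawing {scene}")

BACKENDS = {"pc": D3D12Renderer, "xbox": D3D12Renderer, "ps5": PS5Renderer}

def game_loop(renderer: Renderer) -> None:
    # Identical on every platform: this is the single code base.
    for scene in ("menu", "level_01"):
        renderer.draw_frame(scene)

if __name__ == "__main__":
    platform = sys.argv[1] if len(sys.argv) > 1 else "pc"
    game_loop(BACKENDS[platform]())
```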
 
There’s been a lot of discussion regarding TLOU vram/ram usage on pc. I’ve seen videos from zWormGaming and Daniel Owen testing lots of GPUs and I gotta say, there’s something a bit off with the texture scaling. Looking at the low and medium textures, they are quite poor. I wonder if there’s just a lot of texture variety in this game? They seem to be using Oodle on PC and I wonder if there’s any texture decompression going on that might be causing issues with scaling?

With regards to 8gb gpus, I think this is long overdue. For the last 7ish years, vram has been very stagnant. I still can’t believe reviewers misled people to buy the 3070, 3060ti, and 3070ti despite the 8gb of vram. We were on the verge of a new console gen and every console gen sees a stark increase in hardware requirements for pc. Very few reviewers called out Nvidia’s forced obsolescence with the paltry amount of vram on these cards. Even on Reddit, many were warning of the perils of 8gb cards. Only the 3060, 3080 12gb, 3080ti, and 3090 had enough vram. Now a lot of users are feeling buyer's remorse, as they should, because they were misled. People spent lots of money on their cards while reviewers got them for free, so I guess no issue for them. Imagine spending $500 on a 3070 with 8gb in 2021 or 2022 and having your card start to struggle in games due to vram. There’s a lot of anger on the web over this and I think this game is the straw that broke the camel's back.
 
With regards to 8gb gpus, I think this is long overdue. For the last 7ish years, vram has been very stagnant. I still can’t believe reviewers misled people to buy the 3070, 3060ti, and 3070ti despite the 8gb of vram.

I think it was somewhat shortsighted perhaps to think 8gb would remain adequate, but it's not like there were many options in those price ranges at the time. There was no real DLSS competitor from AMD, their DX11 drivers still sucked, and of course RTX - recommending a Radeon at that time just due to their higher vram for the mere possibility that in ~2 years future console ports will overflow 8gb would be ridiculous.

No one really was 'misled', there just weren't competitors in that price bracket that didn't also come with large compromises themselves. You could say "8GB is going to be a limitation", but if it wasn't really being shown in games at the time and there were many other advantages to Ampere's architecture, what would you recommend?
 
Speaking of VRAM btw, wonder if we'll see Samsung's GDDR6W be implemented in the next gen of cards - seems a way to increase capacity+bandwidth without having to increase GPU bus width (at least, as far as I understand it). Cost of course - who knows.

Samsung has today revealed their new GDDR6W memory type, a new kind of GDDR6 memory that utilised Samsung's Fan-Out Wafer-Level Packaging (FOWLP) technology. Samsung calls this new memory type their "next-generation graphics DRAM technology", as it doubles the capacity and bandwidth that GDDR6 memory can offer, mostly because it is effectively two standard GDDR6 memory chips on a single package.

GDDR6W can offer users two times as much memory per module as standard GDDR6 modules and a 2x bandwidth increase through its doubled I/O design. With GDDR6W, I/O has been doubled from x32 to x64, and Samsung has stated that their current GDDR6W modules can deliver per-pin bandwidths of 22 Gbps. That's a lot of bandwidth.
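Rough math on what those figures would mean for a card, using the 22Gbps per pin and x64 I/O from the quote; the 32Gbit-per-package capacity and the 256-bit bus are my own example assumptions:

```python
# Rough bandwidth/capacity math using the figures in the quote.
# Package capacity and bus width here are example assumptions.
pin_rate_gbps     = 22    # per-pin data rate quoted for GDDR6W
pins_per_package  = 64    # x64 I/O (vs x32 on plain GDDR6)
package_gbytes    = 4     # assumed 32Gbit per GDDR6W package

per_package_gbs = pin_rate_gbps * pins_per_package / 8
print(f"per package: {per_package_gbs:.0f} GB/s, {package_gbytes} GB")           # 176 GB/s, 4 GB

bus_width = 256           # e.g. a hypothetical 256-bit card
packages = bus_width // pins_per_package
print(f"256-bit card: {packages} packages, {packages * per_package_gbs:.0f} GB/s, "
      f"{packages * package_gbytes} GB")                                          # 704 GB/s, 16 GB
```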
 
I think it was somewhat shortsighted perhaps to think 8gb would remain adequate, but it's not like there were many options in those price ranges at the time. There was no real DLSS competitor from AMD, their DX11 drivers still sucked, and of course RTX - recommending a Radeon at that time just due to their higher vram for the mere possibility that in ~2 years future console ports will overflow 8gb would be ridiculous.

No one really was 'misled', there just weren't competitors in that price bracket that didn't also come with large compromises themselves. You could say "8GB is going to be a limitation", but if it wasn't really being shown in games at the time and there were many other advantages to Ampere's architecture, what would you recommend?
That’s a fair take, but this vram issue will get worse, not better. The problem I have with reviewers is, if lay folk could see the writing on the wall, how couldn’t they? The 3000 series launched in 2020 just before the start of a new console generation, which is coincidentally the worst time to upgrade your pc. Even on Reddit, people were warning of the perils of 8gb because we already knew consoles had 16gb of ram. I think if there was a heavy asterisk by the recommendations warning people that the card’s longevity prospects were poor, people wouldn’t be so upset. There’s a strong sentiment online that Nvidia really screwed people with the forced obsolescence. Imagine someone paying $500 for a 3070 in the last 2 years and seeing their card struggle like this. It’s even worse because in this high inflationary environment, they can’t upgrade because money is tight.

I remember when I bought my 3080 10gb in 2021. I already knew it was only good for 1 generation because the vram was a joke. I could already trigger vram overflows by installing enough mods in certain games. Once the 4090 came out, I upgraded. Anyway, there’s a lot of anger in the community right now and I think this game is the straw that broke the camel’s back. Again reviewers get the cards for free so it’s no skin off their back.
 
That’s a fair take, but this vram issue will get worse, not better.

I think the penny has dropped by this point though. Any story I see on the potential 4070/4060ti is paired with a comment on its 8GB 'limitation'. The 4050 supposedly shipping with 6GB is being met with laughter. I can't see any upcoming 4060ti review that is not going to draw attention to its vram limitation.
 
I think the penny has dropped by this point though. Any story I see on the potential 4070/4060ti is paired with a comment on its 8GB 'limitation'. The 4050 supposedly shipping with 6GB is being met with laughter. I can't see any upcoming 4060ti review that is not going to draw attention to its vram limitation.
It's even funnier because the 4060ti will actually be hugely more capable than a 3070/3060ti. It could even outpace a 3080 with SER and the new NVIDIA stuff (opacity micromaps, micro-meshes, whatever, stuff that boosts ray tracing performance I guess). It would even be capable of 1440p ray tracing. Heck, even my 3070 is capable of 45+ FPS ray tracing gameplay in most games at 1440p/4K DLSS conditions. So the 4060ti will be even more capable. It's just a waste of silicon at this point. It would be a monstrous card with 12 GB VRAM for 1440p/optimized settings.

The 3070 over the 6700xt had valid excuses for games before 2022, where you could run ray tracing with decent textures. Now, however, that argument is gone, so it will also be gone for the 4060ti. But yeah... people are still gonna buy it, ain't they. We need to raise awareness somehow. Dunno!

4070 will pack 12 GB VRAM btw.
 
The German sites usually do a more thorough playthrough of games, which shows up the limitations of VRAM.

PCGH also did a special review of the 3070 at launch focusing on its 8GB buffer; there were already a couple of games having issues at 1440p.

 
I don't think Last of Us is a good measure for VRAM usage at all. The game clearly has all sorts of issues that are a result of its porting process rather than hardware capability.

8GB has of course been tight since the start of this generation, and there have been quite a few examples of games that could push you over that limit at especially high settings. In those instances though, some modest paring back of settings (generally still above anything the consoles were offering) would resolve the situation.

It's only very recently we've seen games that hit 8GB so hard that settings have to be pulled back to unreasonable levels (for the GPU's core capability, and often below console settings) to get that performance back. However, to my knowledge most, if not all, of those recent instances were patched to resolve them (Spider-Man, Forspoken, possibly Dead Space?). So in terms of actual games we have right now that essentially cripple mid-to-high-end 8GB GPUs, do we have anything other than TLOU? And that certainly needs to be given more time to cook before proper judgements are made.
 
You’re comparing the max clock of PS5 to the minimum clock of a 2080. Real world clocks are going to put a 2080 above 11 TFLOPs and the PS5 likely below 10. The bandwidth contention as you mentioned is also big. Nvidia GPUs also use their bandwidth more efficiently than pre Infinity Cache AMD GPUs.

I thought the general consensus was that the PS5 would be running at or near its max GPU clock almost all of the time. And yes, the 2080 will boost higher than its rated clock, but not to a crazy degree. Based on this random review I googled, you might expect on average something in the region of 1825MHz, which will give you around 11TF, or about 7% more compute/texturing than the PS5, but still only 84% of the fill rate and equal memory bandwidth.

While the bandwidth contention is definitely a thing there, I don't think there's any evidence of Turing using its bandwidth more effectively than RDNA1. The 5700XT for example is able to outperform the 2070 with the same 448GB/s, and can often compete with the 2070S with the same bandwidth as well. RDNA 2 uses far less main VRAM bandwidth for similar or better performance but obviously makes up for that with IF, so comparisons there aren't much help.

The summary to all this though is that even taking all the above into account, the PS5 is pretty much a 2080-level GPU on paper (winning some, losing some, but only by small margins). Now compare that to the 2070, which is often held up as the PS5 equivalent, and you can see how it's far more accurate to compare it to the 2080 on paper. The 2070 has only 73% of the PS5's fill rate, compute and texturing throughput while having 9% more geometry throughput and the same memory bandwidth. Even the 2070S has only 79% of the fill rate and 88% of the compute/texture throughput.

So being surprised that the PS5 can trade blows with the 2080 doesn't make much sense to me. It's more surprising that it doesn't do so very often. The 2080Ti on the other hand is obviously in another league.

Hard to compare. PS5 ports are heavily optimized to its targeted hardware. Also, the PS5 api is lower level and more performant than DX12 as there is no need to consider compatibility across multiple platforms and configurations.

On paper the 2080 is more powerful than the PS5 GPU but software performance is ultimately a product of more than Tflops and bandwidth.

That's kind of the opposite of my point. On paper they're very comparable, but in reality, we don't often see the PS5 get up to 2080 levels of performance. But when it does, everyone acts surprised.

All of the above assumes raster only btw. No RT in play.
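Quick sanity check on the clock maths, for what it's worth. The 2944 shaders and the ~1825MHz average are the assumptions here; at 1825MHz it works out closer to 10.7TF and roughly 5% over the PS5, and you need to nudge the clock towards 1870MHz to land on the ~11TF / ~7% figure:

```python
# TFLOPs = shaders x 2 FLOPs per clock (FMA) x clock
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

ps5     = tflops(2304, 2.233)   # official max clock -> ~10.3 TF
rtx2080 = tflops(2944, 1.825)   # assumed real-world average clock -> ~10.7 TF
print(f"PS5 {ps5:.2f} TF, 2080 {rtx2080:.2f} TF, 2080/PS5 = {rtx2080 / ps5:.0%}")
```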
 
I thought the general consensus was that the PS5 would be running at or near its max GPU clock almost all of the time. And yes, the 2080 will boost higher than its rated clock, but not to a crazy degree. Based on this random review I googled, you might expect on average something in the region of 1825MHz, which will give you around 11TF, or about 7% more compute/texturing than the PS5, but still only 84% of the fill rate and equal memory bandwidth.

While the bandwidth contention is definitely a thing there, I don't think there's any evidence of Turing using its bandwidth more effectively than RDNA1. The 5700XT for example is able to outperform the 2070 with the same 448GB/s, and can often compete with the 2070S with the same bandwidth as well. RDNA 2 uses far less main VRAM bandwidth for similar or better performance but obviously makes up for that with IF, so comparisons there aren't much help.

The summary to all this though is that even taking all the above into account, the PS5 is pretty much a 2080-level GPU on paper (winning some, losing some, but only by small margins). Now compare that to the 2070, which is often held up as the PS5 equivalent, and you can see how it's far more accurate to compare it to the 2080 on paper. The 2070 has only 73% of the PS5's fill rate, compute and texturing throughput while having 9% more geometry throughput and the same memory bandwidth. Even the 2070S has only 79% of the fill rate and 88% of the compute/texture throughput.

So being surprised that the PS5 can trade blows with the 2080 doesn't make much sense to me. It's more surprising that it doesn't do so very often. The 2080Ti on the other hand is obviously in another league.



That's kind of the opposite of my point. On paper they're very comparable, but in reality, we don't often see the PS5 get up to 2080 levels of performance. But when it does, everyone acts surprised.

All of the above assumes raster only btw. No RT in play.
Don’t rely on the Techpowerup page for specs, especially for Turing. The actual game clocks are much higher than on the sheet. The 2080 Ti for instance has a rated boost clock of 1545MHz when in actuality every single one of them goes past 1800. The better aftermarket models even easily reach 2000+.
 