Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

Just happened to see this again when looking at stories about the PC port recently. Remember when this dropped and there was actually doubt about its veracity, or at least suspicion that the requirements were exaggerated?

I mean, a 'successful' patch would be one that actually got performance in line with this chart. A 3060 doesn't sustain 60fps at high/1080p currently, a 4080 certainly doesn't for 4K/Ultra - and those CPU requirements, lol, there is no way an i7-8700 is adequate for 60fps.

[Attached: official system requirements chart]
 
I thought the general consensus was the PS5 would be running at or near its max GPU clock almost all of the time. And yes, the 2080 will boost higher than its rated clock, but not to a crazy degree. Based on this random review I googled, you might expect on average something in the region of 1825MHz, which would give you around 11TF, or about 7% more compute/texturing than the PS5, but still only 84% of the fill rate and equal memory bandwidth.
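(Back-of-the-envelope on where those TF figures come from - the shader counts are the public specs, and the clocks are the PS5's max and that review's ~1825MHz average boost, so this is a ballpark sketch rather than a measurement; the exact margin over the PS5 obviously moves with whatever boost your particular card sustains:)

```cpp
#include <cstdio>

// FP32 TFLOPS = 2 (FMA counts as two ops) * shader count * clock in MHz / 1e6
static double tflops(int shaders, double clock_mhz) {
    return 2.0 * shaders * clock_mhz / 1.0e6;
}

int main() {
    // Assumed figures: PS5 = 2304 ALUs at up to 2230MHz,
    // RTX 2080 = 2944 ALUs at an average observed boost of ~1825MHz.
    printf("PS5      ~%.2f TF\n", tflops(2304, 2230.0)); // ~10.3 TF
    printf("RTX 2080 ~%.2f TF\n", tflops(2944, 1825.0)); // ~10.7 TF
    return 0;
}
```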

While the bandwidth contention is definitely a thing there, I don't think there's any evidence of Turing using its bandwidth more effectively than RDNA1. The 5700XT, for example, is able to outperform the 2070 with the same 448GB/s, and can often compete with the 2070S, which also has the same bandwidth. RDNA2 uses far less main VRAM bandwidth for similar or better performance, but obviously makes up for that with Infinity Cache, so comparisons there aren't much help.

The summary to all this, though, is that even taking all of the above into account, the PS5 is pretty much a 2080-level GPU on paper (winning some/losing some, but only by small margins). Now compare that to the 2070, which is often held up as the PS5 equivalent, and you can see why it's far more accurate to compare it to the 2080 on paper. The 2070 has only 73% of the PS5's fill rate, compute and texturing throughput, while having 9% more geometry throughput and the same memory bandwidth. Even the 2070S has only 79% of the fill rate and 88% of the compute/texture throughput.
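(To show how those percentages fall out of the rated specs - the shader/ROP/TMU counts and reference boost clocks below are just the public figures, and real cards boost above them, so take the output as ballpark:)

```cpp
#include <cstdio>

struct Gpu {
    const char* name;
    int shaders, rops, tmus;
    double clock_mhz;       // reference boost (PS5: max clock)
    double bandwidth_gbs;
};

int main() {
    // Assumed reference figures from public spec sheets.
    Gpu ps5 = {"PS5", 2304, 64, 144, 2230.0, 448.0};
    Gpu gpus[] = {
        {"RTX 2070",  2304, 64, 144, 1620.0, 448.0},
        {"RTX 2070S", 2560, 64, 160, 1770.0, 448.0},
    };
    for (const Gpu& g : gpus) {
        double compute = (g.shaders * g.clock_mhz) / (ps5.shaders * ps5.clock_mhz);
        double fill    = (g.rops    * g.clock_mhz) / (ps5.rops    * ps5.clock_mhz);
        double texture = (g.tmus    * g.clock_mhz) / (ps5.tmus    * ps5.clock_mhz);
        // Prints roughly: 2070 ~73% across the board, 2070S ~79% fill / ~88% compute+texture.
        printf("%-9s vs PS5: compute %3.0f%%  fill %3.0f%%  texture %3.0f%%  bandwidth %3.0f%%\n",
               g.name, 100 * compute, 100 * fill, 100 * texture,
               100 * g.bandwidth_gbs / ps5.bandwidth_gbs);
    }
    return 0;
}
```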

So being surprised that the PS5 can trade blows with the 2080 doesn't make much sense to me. It's more surprising that it doesn't do so very often. The 2080Ti, on the other hand, is obviously in another league.



That's kind of the opposite of my point. On paper they're very comparable, but in reality, we don't often see the PS5 get up to 2080 levels of performance. But when it does, everyone acts surprised.

All of the above assumes raster only btw. No RT in play.

LOL. How does one say “on paper, A is more powerful than B” then make points with real world data?

But you are right as I made that point thinking of Ti specs.
 
LOL. How does one say “on paper, A is more powerful than B” then make points with real world data?

But you are right as I made that point thinking of Ti specs.

My point was really that a general judgement seems to have been made that the PS5 is "around 2070 level" based on its "specs". And then people act like it's punching above its weight if it performs like a 2080. I'm arguing that the initial judgement is wrong, and that based on the specs alone, the PS5 should be judged against the 2080.

I think the judgement has largely come from the 2070 being seen as a 5700XT equivalent, which is in turn seen as a PS5 equivalent. But in fact the PS5 is on paper 0-18% faster than the 5700XT depending on the metric, and it has whatever RDNA2 efficiency enhancements apply on top of that. And further, the 5700XT tends to beat the 2070 in rasterised workloads for the most part, generally competing more in line with the 2070 Super these days.
 
I thought the general consensus was the PS5 would be running at or near its max GPU clock almost all of the time.
I've never been aligned with this. How PC GPUs boost clocks and how the PS5 boosts clocks are entirely different. One is based on power and thermals. The other is a universal clocking mechanism based on the code it reads, to ensure that all PS5s behave identically. Because of this, they have to choose a more conservative clocking algorithm to accommodate silicon differences.

TLDR; it must be loose enough to accommodate lower-bucket silicon, otherwise the price points of the PS5 would be outrageous.
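(A toy illustration of the distinction - purely conceptual, not how either vendor's firmware actually works, and every number and curve below is made up. The point is just that a PC-style boost reads this particular chip's sensors, while the PS5-style approach derives the clock deterministically from the workload, so every unit lands on the same frequency for the same code:)

```cpp
#include <algorithm>
#include <cstdio>

// Toy model only: real boost algorithms are far more involved.

// PC-style boost: clock depends on this particular card's measured
// temperature and power draw, so two cards can land on different clocks.
double pc_boost_mhz(double temp_c, double power_w) {
    double clock = 1900.0;
    if (temp_c  > 75.0)  clock -= (temp_c  - 75.0)  * 10.0; // thermal throttle
    if (power_w > 215.0) clock -= (power_w - 215.0) * 2.0;  // power limit
    return std::max(clock, 1500.0);
}

// PS5-style "universal" clocking: frequency is a deterministic function of
// an activity estimate derived from the code being executed, so every
// console picks the same clock for the same workload regardless of silicon.
double deterministic_clock_mhz(double activity /* 0..1, modeled workload load */) {
    const double max_mhz = 2230.0;
    // Conservative curve so the lowest-binned silicon can still hit it.
    return max_mhz - activity * 230.0;
}

int main() {
    printf("PC card A: %.0f MHz, PC card B: %.0f MHz\n",
           pc_boost_mhz(70.0, 210.0), pc_boost_mhz(82.0, 225.0));
    printf("Any PS5 at activity 0.4: %.0f MHz\n", deterministic_clock_mhz(0.4));
    return 0;
}
```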
 
My point was really that a general judgement seems to have been made that the PS5 is "around 2070 level" based on its "specs". And then people act like it's punching above its weight if it performs like a 2080. I'm arguing that the initial judgement is wrong, and that based on the specs alone, the PS5 should be judged against the 2080.

I think the judgement has largely come from the 2070 being seen as a 5700XT equivalent, which is in turn seen as a PS5 equivalent. But in fact the PS5 is on paper 0-18% faster than the 5700XT depending on the metric, and it has whatever RDNA2 efficiency enhancements apply on top of that. And further, the 5700XT tends to beat the 2070 in rasterised workloads for the most part, generally competing more in line with the 2070 Super these days.
It's generally agreed that the PS5 can be anywhere from a 2070 to a 2080S in raster, depending on the game and the bottleneck.
 
Just happened to see this again when looking at stories about the PC port recently. Remember when this dropped and there was actually doubt about its veracity, or at least suspicion that the requirements were exaggerated?

I mean, a 'successful' patch would be one that actually got performance in line with this chart. A 3060 doesn't sustain 60fps at high/1080p currently, a 4080 certainly doesn't for 4K/Ultra - and those CPU requirements, lol, there is no way an i7-8700 is adequate for 60fps.

[Attached: official system requirements chart]
Huh, this chart is pretty much on point? I'm not sure what you're talking about. From all the benchmarks I've seen, the 3060 delivers a mostly 60fps experience at 1080p high settings. Do you expect them to play through the whole game with each GPU and chart the fps to make sure it doesn't drop at certain points? The same goes for the 4080 as well: it mostly hits 60fps, but it can drop. In indoor scenes the fps ramps up, and outdoors it drops back down. This chart suggests an average of 60fps, not a locked 60fps.
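(Quick illustration of why 'average 60' and 'locked 60' read so differently - the frametimes below are made up for the example:)

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical captured frametimes in ms: mostly ~15ms with a couple of spikes.
    std::vector<double> ms = {15, 15, 15, 15, 15, 15, 15, 15, 15, 30,
                              15, 15, 15, 15, 15, 15, 15, 15, 33, 15};
    double total = 0;
    for (double m : ms) total += m;
    double avg_fps = 1000.0 * ms.size() / total;

    // Worst-frame style metric: look at the slowest frame in the capture.
    double worst_fps = 1000.0 / *std::max_element(ms.begin(), ms.end());

    printf("Average: %.0f fps\n", avg_fps);       // looks like "60fps"
    printf("Worst frame: %.0f fps\n", worst_fps); // the drop you actually feel
    return 0;
}
```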

As an aside, I think many 3070 & 3070 Ti users thought that they'd be in the Performance section of this chart, but actually I think Recommended is where they fall. I just noticed this now, but it's very telling that they chose the 2080 Ti and 6750 XT - both have 11GB of VRAM or more.
 
Huh, this chart is pretty much on point? I'm not sure what you're talking about. From all the benchmarks I've seen, the 3060 delivers a mostly 60fps experience at 1080p high settings.

I'm actually running it, though. The 'benchmarks' are largely taken in the daytime in the opening 20 minutes. Forget the game as a whole - they're not even indicative of the game before the 2nd act.

Do you expect them to play through the whole game with each GPU and chart the fps to make sure it doesn't drop at certain points?

The parts where I'm experiencing drops are within the first 15% of the game. Like I said, it's not just the GPU - hell, I'd say my 3060 is usually less responsible for the drops than my CPU is. My i5-12400 is often at 90%+; there's no way the recommended 8700 will cut it for 60fps either - my 12400 is significantly faster.

I mean yeah, rushed-out videos to meet the hype around a game's release are just that: rushed. If I didn't actually play the game beyond the first 30 minutes, I wouldn't notice how often DLSS/FSR completely falls on its face when using the flashlight either. Hopefully that's why we're in this thread - to get a slightly more methodical perspective, as opposed to coverage driven by chasing clicks or marketing materials.

I expect most of the non-framerate related issues will be fixed or at least improved, but I don't expect any improvement in rendering performance. Uncharted 4 never received any either.

Yeah, that's my feeling. It wouldn't be a disaster if that were the case - that's always the upside when you at least take care of shader compilation up front: better hardware will eventually 'solve' the problem, whereas you're permanently screwed with shader stuttering. So if they can iron out these small traversal drops, fix the crashing, and provide a decent 60fps for at least modern 6-core/12-thread CPUs, that would be 'ok'. Not great of course, still in 'wait for a sale' territory, but at least if/when you upgrade, the game will scale with your hardware. Ultimately a big part of this, at least in terms of loading time/CPU load, is that this is the first ported game we've seen that leans really heavily on hardware texture decompression, so I just don't see them being able to patch out this bottleneck. If DirectStorage 1.1 had been out of beta 6 months earlier, maybe, but alas.

Aside from that, they really gotta fix those DLSS issues with the flashlight though, man - you're constantly using it in this game. There are rarer instances of stuff like this (which coincidentally also behaved nearly identically in Uncharted), but the very common problem is that it basically gives some surfaces a cel-shaded effect with black outlines, and you'll notice it even with 4K DLSS Quality. I hope it's not a case where they have to stick to the exact shader ND uses for this effect to remain consistent with the PS5 version or something; there are ways around this.
 
Just happened to see this again when looking at stories about the PC port recently. Remember when this dropped and there was actually doubt about its veracity, or at least suspicion that the requirements were exaggerated?
I merely said that these requirements are basically never accurate and not to treat them as gospel. You then pushed back on this, and even tried at the time to distort what my actual claim was, when I was not saying the requirements were over- or understated, just that they were inevitably not going to be accurate.

Lo and behold, I was correct, as you're even admitting here, all while AGAIN trying to distort what the conversation actually was at the time.
 
I merely said that these requirements are basically never accurate and not to treat them as gospel. You then pushed back on this, and even tried at the time to distort what my actual claim was, when I was not saying the requirements were over- or understated, just that they were inevitably not going to be accurate.
I wasn't actually posting this as a 'gotcha' for you. I'm posting it because, through my experience with the actual game, I was surprised to see how much the requirements were actually undersold by this chart - even I didn't expect that.

But hey, if you do want to revisit that thread, so be it.

It was perfectly clear what your intent was, which was made even clearer by your subsequent responses - you were pushing back on some supposed hysteria from people expressing mild concern that these specs might be accurate. Your replies made no sense outside that context: a few posts saying "Hmm, these seem really high, what's up with that?" and you responding with how 'bewildered' you were that people were 'taking them seriously'. The rest of your post was expressing incredulity at how exaggerated the specs were!

This, in light of the game's CPU demands, is particularly comical now:

And again, you're completely ignoring my whole point about the CPU requirements being inaccurate. I'll repeat myself again, though I shouldn't have to - the fact that the CPU requirements are CLEARLY made up is all the evidence we need to know that these sorts of requirements in general aren't being tested thoroughly and shouldn't be taken so seriously. Just because people are only 'talking about' GPUs doesn't change this. It's not complicated.

The meaning is obvious - you even argued that TLOU:RM was essentially a jumped-up PS4 game, hence further evidence that there was no cause for alarm. This isn't a half-remembered voice conversation; we can scroll, my dude.
 
One benefit to this whole situation is the guaranteed entertainment from the eruption that will happen here with the inevitable NXGamer video.

Eh, I dunno. Never count him out to make some exaggerations, but it's not like there isn't a consensus here - even Horizon didn't get this harsh a reaction at launch. Unless he tries to claim that the loading time is a result of PCI-E bandwidth or something, it's clear that, at least in some important aspects, this is indeed a case of a game highly optimized for its target platform, greatly assisted by custom hardware that wasn't available on the PC in time for development (well, the hardware exists, but not the API in a state which could be counted upon). It truly is in 2080 Ti territory for this game.

(That, and the fact that activity on this site has dropped significantly since the closure of the architectural forums regardless.)
 
Eh, I dunno. Never count him out to make some exaggerations, but it's not like there isn't a consensus here - even Horizon didn't get this harsh a reaction at launch. Unless he tries to claim that the loading time is a result of PCI-E bandwidth or something, it's clear that, at least in some important aspects, this is indeed a case of a game highly optimized for its target platform, greatly assisted by custom hardware that wasn't available on the PC in time for development (well, the hardware exists, but not the API in a state which could be counted upon). It truly is in 2080 Ti territory for this game.

(That, and the fact that activity on this site has dropped significantly since the closure of the architectural forums regardless.)

The issue is with how it's interpreted. There are people - and NXG, I'm sure, will be one of them - who will take this as validation that a closed console platform can achieve the performance of a far more powerful PC thanks to targeted optimisation, and will herald it as the start of a new wave of games that see the PS5 performing in line with or beyond the 2080Ti/3070.

And there are others that see this as being primarily down to very poor optimisation on the PC side.

The filter I use to decide between the two is to ask the question: is it feasible for a developer to make this game, with these visuals (or better), from the ground up for PC/multiplatform while achieving better performance? For example, say Epic decided to completely remake this game in UE5 with PC as an equal target from the outset. Is it realistic to think the end result couldn't perform better while looking at least as good?

I accept there is more to this than simply 'ND/Iron Galaxy did a sh*t/lazy port'. More realistically, the game is so laser-targeted at the console architecture and API that it was just extremely difficult to adapt it to the PC without significant code re-authoring that the budget simply didn't support. The end result is code that is poorly optimised for the PC, because it's effectively PlayStation code shoehorned onto the platform, as opposed to more platform-agnostic multiplatform code.

If every game were solely developed for the PS5 and then shoehorned onto the PC, then yeah, the console probably would regularly perform in line with or beyond the likes of the 2080Ti. Fortunately for PC gamers at least, the future of games, including Sony exclusives, seems to be multiplatform.
 
The thing with NXG is he doesn't have the CPU power to actually do a proper technical review of the port.

I think the fastest CPU he has is a Ryzen 5 5600, which is nowhere near fast enough.
 
I accept there is more to this than simply 'ND/Iron Galaxy did a sh*t/lazy port'. More realistically, the game is so laser-targeted at the console architecture and API that it was just extremely difficult to adapt it to the PC without significant code re-authoring that the budget simply didn't support. The end result is code that is poorly optimised for the PC, because it's effectively PlayStation code shoehorned onto the platform, as opposed to more platform-agnostic multiplatform code.

Yeah, I agree - that's the implicit assumption when you see a game that's 'unoptimized'. There are always alternative ways to go about implementing things, but the reality of time and budgets comes into play. Like there is probably some way to deal with the texture data in TLOU that's more performant for CPU-based decoding, but the time/cost to evaluate that and repackage every texture in that format would be prohibitive.

The thing with NXG is he doesn't have the CPU power to actually do a proper technical review of the port.

I think the fastest CPU he has is a Ryzen 5 5600, which is nowhere near fast enough.

I think he actually has someone else run benchmarks on more modern hardware for him - I think he himself is still stuck with a Zen 2 CPU atm.
 
The thing with NXG is he doesn't have the CPU power to actually do a proper technical review of the port.

I think the fastest CPU he has is a Ryzen 5 5600, which is nowhere near fast enough.
I think if he benchmarks the game with a processor that falls within the minimum/recommended spec, then he's well within his rights to do a proper technical review of the game... assuming he frames the review in that context, with the appropriate expectations.
 
Like there is probably some way to deal with the texture data in TLOU that's more performant for CPU-based decoding, but the time/cost to evaluate that and repackage every texture in that format would be prohibitive.

Yes, this is indeed one of the more obvious ways this game could have been better tailored to the PC (if time and budget had allowed), rather than shoehorning in the PS5-optimised solution. I wouldn't be surprised if NXG, for example, held up the CPU-based Kraken decompression on PC - which is no doubt contributing to the high CPU requirements versus the hardware-based equivalent on PS5 - as a shining advantage of closed systems. Whereas the optimal solution for PC would obviously have been to compress all the GPU-bound assets with GDeflate and decompress them on the GPU via DirectStorage, which would save CPU power both in general IO overhead and in decompression.
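(For the curious, a heavily abbreviated sketch of what that path looks like with the DirectStorage 1.1 API - error handling, D3D12 resource creation and fence synchronisation are all omitted, the file name is made up, and it assumes the asset has already been repackaged as a GDeflate-compressed blob, which is of course exactly the repackaging work this port didn't get:)

```cpp
#include <dstorage.h>   // DirectStorage 1.1
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: stream a GDeflate-compressed asset straight into a GPU buffer,
// letting the GPU (or the runtime's fallback path) handle decompression
// instead of burning CPU cores on a CPU-side decode.
void LoadAssetViaDirectStorage(ID3D12Device* device, ID3D12Resource* destBuffer,
                               UINT64 compressedSize, UINT64 uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"asset.gdeflate", IID_PPV_ARGS(&file)); // hypothetical file

    DSTORAGE_QUEUE_DESC queueDesc = {};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    DSTORAGE_REQUEST request = {};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = 0;
    request.Source.File.Size          = static_cast<UINT32>(compressedSize);
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = static_cast<UINT32>(uncompressedSize);
    request.UncompressedSize            = static_cast<UINT32>(uncompressedSize);

    queue->EnqueueRequest(&request);
    queue->Submit();
    // Real code would enqueue a fence signal and wait on it before using destBuffer.
}
```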
 