Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

While the discussion of the PS5's display output resolution support is fascinating, I don't know what it has to do with the internal rendering resolution of a game. Games render at an arbitrary resolution determined by the developers, and this is then scaled to whatever display resolution is supported by the console and the TV.

I am not even sure why this is a conversation right now just because the PS5 version of THPS renders at 1080p/120 while the XSX renders at 1440p. I mean, you can debate, argue, and have a conversation about why the developers made such choices for the two consoles, but to say it is because the PS5 does not support 1440p as an output resolution flies in the face of more or less every game released since the launch of these consoles.
It's pretty clear why this excuse is being used. It puts the PS5 in a better light even though, historically, it makes no sense.
 
On top of the actual tech, I am sorta surprised by such a big resolution hit in THPS.

Maybe we should wait and see how they perform. Or maybe the theory that the shared power budget really is a big bottleneck is true; this result would make sense for it.

COD handled 120 fps fine, and THPS is at its core a very old game, but the COD engine is a much more modern engine while THPS is on an aging UE4. Maybe there's just more overhead.
 

Or THPS just does things differently; we might see more of this going forward.
AAA devs and exclusive games might hide the variable clocking better or have better workarounds for it.
 
Recall the old PS4 bandwidth slide:
[Image: PS4 GPU bandwidth slide showing effective GPU bandwidth of ~140 GB/s rather than the 176 GB/s peak when the CPU is also using memory]


Available bandwidth is likely the largest factor here.
Consoles have to share bandwidth with the CPU and PCs don't, so making cross comparisons to PC benchmarks may not always work if you're looking strictly at clock speeds/TF, except in scenarios where bandwidth is not the bottleneck.

If the CPU normally takes, say, 10 GB/s at 30 fps (heavier CPU titles will eat more; more NPCs = more CPU work), that becomes 40 GB/s at 120 fps. Combine that with the disproportionate way CPU traffic eats into total bandwidth, as per the older PS4 slide, and there is even less left for the GPU; the GPU is starved for bandwidth.

This is likely the main culprit for resolution loss before you even look at clock speeds, fill rates, etc.

So 448 GB/s becomes 408 GB/s from straight subtraction, then you take even more off for the disproportionate loss, and you have a GPU that is unlikely to hold higher resolutions at higher frame rates. You're going to be well sub-380 GB/s.

Then you couple that with higher CPU usage pulling back any SmartShift power that might otherwise go to the GPU, and you're in a scenario where the PS5 cannot produce superior results compared to something like a 5700 XT.

The 5700 XT has all 448 GB/s to itself, and the CPU, sitting on its own system memory, is completely unaffected.
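To put rough numbers on that, here's a minimal back-of-envelope sketch; the 10 GB/s CPU baseline and the 2x contention factor are illustrative assumptions on my part, not measured figures:

Code:
# Back-of-envelope: how much bandwidth is left for the GPU once the CPU takes
# its share. The 10 GB/s baseline and the 2x contention factor are guesses for
# illustration only, not measured values.

TOTAL_BW = 448.0          # GB/s, PS5 GDDR6 peak
CPU_BW_AT_30FPS = 10.0    # GB/s, assumed CPU traffic at 30 fps
CONTENTION_FACTOR = 2.0   # assumed: each GB/s the CPU pulls costs ~2 GB/s of
                          # effective bandwidth (the old PS4 slide showed the
                          # loss being worse than straight subtraction)

def gpu_bandwidth_left(target_fps: float) -> float:
    """Rough effective bandwidth remaining for the GPU at a given frame rate."""
    cpu_bw = CPU_BW_AT_30FPS * (target_fps / 30.0)   # CPU traffic scales with fps
    return TOTAL_BW - cpu_bw * CONTENTION_FACTOR

for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> ~{gpu_bandwidth_left(fps):.0f} GB/s left for the GPU")
# 30 fps -> ~428, 60 fps -> ~408, 120 fps -> ~368 (toy model only)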

These are typical challenges for game developers working with consoles, and not every developer is capable of maximizing the hardware for every game the same way some of the best AAA teams can.

4K60 and 1440p@120 are hard targets for consoles to hit (due to the sharing with the CPU) and will get harder as the generation goes on. Aside from memory footprint concerns, which SSDs mitigate, not much hardware has been added to mitigate the bandwidth bottleneck. The more computation you are doing, i.e. the more you saturate the CUs, the more bandwidth you pull, as you've got to write the results somewhere. Bandwidth will likely be the ceiling here for both consoles. They will need to come up with creative ways to reduce the bandwidth pressure, and that likely means some creative up-sampling techniques.
Is 1440p 120 fps more bandwidth demanding than 2160p 60 fps?
 
Yes and no. No for the GPU, as 1440p/120 pushes about the same pixel count per second as 2160p/60. But the CPU also needs bandwidth and must do twice the work, so yes, the CPU needs more bandwidth and more power.
Clearly more, right? It's the same pixel count (well, 1440p/120 is very slightly less), but bandwidth isn't only used to send the framebuffer around; other assets are also getting sent around twice as fast (unless I'm wildly misunderstanding how this works). The bandwidth cost is probably considerably higher overall.
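For reference, a quick pixel-rate comparison, assuming the standard 16:9 resolutions:

Code:
# Pixels pushed per second in each mode. The raw pixel rate is nearly the same,
# which is why the extra cost sits mostly on the CPU/per-frame side rather than
# the framebuffer itself.

modes = {
    "2160p @ 60":  (3840, 2160, 60),
    "1440p @ 120": (2560, 1440, 120),
}

for name, (w, h, fps) in modes.items():
    print(f"{name}: {w * h * fps / 1e6:,.1f} Mpixels/s")

# 2160p @ 60 : 497.7 Mpixels/s
# 1440p @ 120: 442.4 Mpixels/s  (~11% fewer pixels per second, but twice the frames)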
 
Is 1440p 120 fps more bandwidth demanding than 2160p 60 fps?
On a console, yes. You've doubled your CPU bandwidth requirement, and further increased the disproportionate loss on total bandwidth.
Reducing resolution by 50% doesn't necessarily reduce the bandwidth requirement by 50%. Just as CU scaling doesn't necessarily increase performance linearly, reducing the resolution doesn't reduce bandwidth requirements the same way because of compression and other factors; but if you've doubled the frame rate, that absolutely doubles the bandwidth demand.

I would lean towards 1440p@120 requiring more juice from the system than 4K@60 strictly from a bandwidth perspective. That's not to say higher resolutions don't require more bandwidth (they certainly do), but if you're reducing resolution and doubling frame rate, it's an interesting question to investigate. Some titles may see more impact than others from a resolution reduction in terms of bandwidth. But as a general statement: increasing resolution tends to increase the requirements on only parts of the rendering pipeline, whereas doubling the frame rate increases requirements on the whole rendering pipeline, as you do everything twice. That's a true doubling.

Looking at past releases, we've seen games locked at 4K60 drop to an inconsistent 1080p-1440p at 120. Most games attempting a 120 fps mode miss that locked target. I would call 120 fps fairly aspirational for most titles on consoles.

Typically, when bandwidth numbers are posted they are best-case scenarios: all writes followed by all reads will get you near the maximum. The reality is that a mixture of reads and writes eats heavily into your bandwidth, and this is another issue consoles struggle with, since the CPU is constantly reading and writing to memory at the same time the GPU is. I suspect this is a large part of the disproportionate bandwidth loss between CPU and GPU, but it could be something else entirely. I'm not sure.
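As a toy illustration of that last point (the efficiency figures here are pure guesses just to show the shape of the problem, not real GDDR6 numbers):

Code:
# Quoted peak bandwidth assumes ideal access patterns. GDDR loses efficiency
# when it has to keep switching between reads and writes, which is what happens
# when the CPU and GPU hammer the same memory at once. Efficiency values below
# are illustrative assumptions, not measurements.

PEAK_BW = 448.0  # GB/s

def effective_bw(mix: float, streaming_eff: float = 0.90, mixed_eff: float = 0.65) -> float:
    """mix = 0.0 for long streaming bursts, 1.0 for constant read/write switching."""
    efficiency = streaming_eff * (1.0 - mix) + mixed_eff * mix
    return PEAK_BW * efficiency

print(f"mostly streaming access: ~{effective_bw(0.1):.0f} GB/s")
print(f"heavily mixed CPU+GPU  : ~{effective_bw(0.8):.0f} GB/s")
# mostly streaming: ~392 GB/s, heavily mixed: ~314 GB/s (toy model only)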
 
There are also parts of the rendering pipeline that don't scale with resolution, so they take up a flat portion of your frame-time budget. Doubling the frame rate therefore often requires more than double the GPU power per pixel.
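A quick way to see why, assuming purely for illustration that 3 ms of each frame is resolution-independent work:

Code:
# Some per-frame work (culling, shadow passes, sim, post-processing setup) costs
# roughly the same regardless of resolution. The 3 ms figure is an assumption
# for illustration.

FIXED_COST_MS = 3.0  # assumed resolution-independent work per frame

for fps in (60, 120):
    budget_ms = 1000.0 / fps
    variable_ms = budget_ms - FIXED_COST_MS   # what's left for resolution-scaled work
    print(f"{fps:>3} fps: {budget_ms:5.2f} ms budget, {variable_ms:5.2f} ms for resolution-dependent work")

# 60 fps : 16.67 ms budget, 13.67 ms left
# 120 fps:  8.33 ms budget,  5.33 ms left -> well under half remains, so the
# per-pixel cost has to drop by more than half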
 
This even makes sense and is possible, but not so much in Tony Hawk: at max settings and 1440p, a GTX 1660 Super with 336 GB/s of bandwidth is close to 120 fps.
 

Huge difference between close and locked 120fps. Also the PC version does not have the next gen upgrades.
 
What next gen upgrades does this game have?

A true next-gen experience is coming to the PlayStation 5 and Xbox Series X|S featuring improved resolution, spatial audio, high-fidelity atmospherics, and more.

Skate in super crisp 120 FPS at 1080P, or 60 FPS in native 4K*. Watch the levels come to life like never before with sharper dynamic shadows, reflections, and lens flare, plus enhanced skater textures and more.
 
This even makes sense and is possible, but not so much in Tony Hawk: at max settings and 1440p, a GTX 1660 Super with 336 GB/s of bandwidth is close to 120 fps.
I mean, the settings chosen by the developers have to hold up across the whole game, not just a track or two, right?
All the footage you've chosen regularly dips below 120 once he stops facing a wall doing 360s. As soon as he rides out to see a greater landscape, you see sub-120.
And this isn't taking into account anything like enhanced settings on consoles, etc.

Generally speaking, a locked 120 means the lowest the game will dip to is 120... that's sort of the idea, at least. Having a 120 fps mode that can't hold 120 for at least 98% of frames sorta seems off.
 
Check my post https://forum.beyond3d.com/posts/2197625/ (120+ fps on a card with 256.3 GB/s of bandwidth).
 
I watched that video. It drops to nearly 100 at times once the view opens up. Sure, staring at a wall doing 360s is netting him 150 fps, but once he gets out where he can see more, it drops dramatically.

Using random footage from other GPUs to isolate a single thing here or there isn't going to help you diagnose the issue with the PS5. It's like saying a Ford F-150 beats a Porsche Cayenne in a race only when they're towing loads, then being surprised when the Porsche wins without towing.

Nothing you're doing in terms of comparisons here will resolve the question of why it's 1080p on PS5. It is what it is. I'm sure if it could hit 1440p@120 they would have chosen to. It's the developer's call on console, and it's always been like this. I've said it before, many times now: these kinds of splits happen all the time and may or may not be reflective of the hardware's capabilities. But to assume they could have cranked the settings higher and simply chose not to is pretty dumb. They want as many sales as they can get, and developers often aim for parity when they can get it.
 
But you know that the PS5 GPU is definitely more capable than a 1070 (a 5700 XT is ~1.4x faster), and even on consoles 120 fps modes drop to around 100 fps in other games ;) I think your theory about a lack of bandwidth (448 GB/s) is not correct when 256 GB/s seems to be enough in Tony Hawk.
 
By that argument, 12 TF, 560 GB/s, and 16 GB of memory can only do as well as that 1070 too.
None of it makes sense when you compare things to PC, especially if you aren't doing serious investigation and benchmarking. You're ultimately just shooting in the dark.

I don't know what else to say. There are only so many factors performance can be attributed to:
ALU, frequency, bandwidth, and storage.

Outside of that, it's just poor programming (the game) or poor performance from the drivers (whatever kit they are using).
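For what it's worth, the theoretical ALU side is easy to put numbers on using the commonly quoted shader counts and boost clocks (the PS5 clock being a variable maximum); it just says nothing about how bandwidth-limited the real workload is:

Code:
# Theoretical FP32 throughput = shader count x 2 ops/clock x clock.
# This deliberately ignores bandwidth, which is exactly the point of the
# discussion above.

parts = {
    "PS5":        (36 * 64, 2.230),   # 36 CUs x 64 shaders, variable max clock (GHz)
    "XSX":        (52 * 64, 1.825),
    "RX 5700 XT": (40 * 64, 1.905),   # boost clock
    "GTX 1070":   (1920,    1.683),   # CUDA cores, boost clock
}

for name, (shaders, ghz) in parts.items():
    tflops = shaders * 2 * ghz / 1000.0
    print(f"{name:<11}: {tflops:5.2f} TF theoretical")
# PS5 ~10.28, XSX ~12.15, RX 5700 XT ~9.75, GTX 1070 ~6.46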
 