Digital Foundry Article Technical Discussion [2021]

I checked the Outriders PS5 demo and have to say, though the game is not a looker, it's quite sharp. I really like the effects of dynamic resolution here and also in Valhalla; it works really well, definitely better than, for example, the 1440p checkerboarding in Death Stranding.
 
The cinematics have animation stutter, like I mentioned in the video, but not necessarily frame drops like at release.

Yeah, at one point you made a direct comparison saying 1.10 is smooth now, but I was thinking "oh, 1.10 looks way worse from here".

Anyway, like you mentioned, the engine was never designed to be used on PC (not for this game, anyway). It's still not really good imo...
 

nice, I think I would pick this up now on PC had I not already purchased a PS5.
The major issues look resolved here.

Look at those AF compares!

These are seriously impressive improvements. Kudos to the devs for dealing with pretty much all of the issues; it's rare to see such post-launch dedication. It just so happens I've owned this game since it launched, but I've never played it. This makes me happy!
 
This seems to be a bit of a trend across a bunch of recent titles, and I'm wondering why. This has to be something that is wholly in the control of the developer: you can be as optimistic or conservative as you like regarding your prediction of whether your target resolution fits in your frame time.
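
To spell out what that prediction actually looks like: roughly, you measure last frame's GPU time and solve for the scale factor that should fit your budget. A minimal sketch, with invented names (real engines smooth this over several frames):

Code:
#include <algorithm>
#include <cmath>

// Toy dynamic-resolution controller. GPU cost scales roughly with pixel
// count, i.e. with scale^2, so solve lastGpuMs * (s / lastScale)^2 = budgetMs.
struct DrsController {
    double budgetMs = 16.6; // frame budget for 60 fps
    double minScale = 0.5;  // lowest allowed resolution scale per axis
    double maxScale = 1.0;

    double nextScale(double lastScale, double lastGpuMs) const {
        double s = lastScale * std::sqrt(budgetMs / lastGpuMs);
        return std::clamp(s, minScale, maxScale);
    }
};

Being optimistic or conservative is just a question of how much you trust lastGpuMs to predict the next frame.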

So weird. :-|

Maybe the devs think that VRR on Xbox platforms cleans up the framerate, so they don't have to spend the time optimizing.
 
Maybe the devs think that VRR on Xbox platforms cleans up the framerate, so they don't have to spend the time optimizing.

I'm thinking this, or they just figure that some light post-patch support will clean it up. I do think that when DirectStorage is fully running, these frame rate issues will be a thing of the past.
 
Maybe the devs think that VRR on Xbox platforms cleans up the framerate, so they don't have to spend the time optimizing.

As a developer, you've got to be checking this, right? It surely cannot be a coincidence that a bunch of recent multi-platform games are near-locked at 60fps on PS5 but at a generally lower dynamic resolution than XSX. The whole point of a dynamic resolution is to hit your frame rate budget, so why is this working on PS5 builds but not XSX builds? ¯\_(ツ)_/¯
 
As a developer, you've got to be checking this, right? It surely cannot be a coincidence that a bunch of recent multi-platform games are near-locked at 60fps on PS5 but at a generally lower dynamic resolution than XSX. The whole point of a dynamic resolution is to hit your frame rate budget, so why is this working on PS5 builds but not XSX builds? ¯\_(ツ)_/¯
On the other hand, Cold War has proper resolution scaling on XSX and a bug on PS5.
 
Maybe the devs think that VRR on Xbox platforms cleans up the framerate, so they don't have to spend the time optimizing.
lmao, what percentage of people have a VRR TV? 5%? ;d It will change in the future, but I highly doubt that's the devs' mindset ;)
 
Perhaps MS plans to free up resources in a software update, and these titles that focus on resolution will be able to take advantage?
 
Perhaps MS plans to free up resources in a software update, and these titles that focus on resolution will be able to take advantage?
If something is broken in software then they can fix it in software. Of the two platforms, in terms of being able to predict performance and hitting your target frame rate I would have put my money on the variable-clocked PS5 being more tricky. :yep2:

Devs can fix this on XSX by better tuning their dynamic resolution scaling formula. Just make it a little more pessimistic.
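
Concretely, "a little more pessimistic" can be as small a change as padding the measured GPU time before solving for the new scale. A toy example, parameter names invented:

Code:
#include <algorithm>
#include <cmath>

// Assume the next frame will cost a bit more than the last one measured.
// headroom = 1.1 reserves roughly 10% of the frame budget.
double nextScalePessimistic(double lastScale, double lastGpuMs,
                            double budgetMs = 16.6, double headroom = 1.1) {
    double paddedMs = lastGpuMs * headroom;                // pessimistic cost estimate
    double s = lastScale * std::sqrt(budgetMs / paddedMs); // fit the padded cost
    return std::clamp(s, 0.5, 1.0);
}

The cost is a slightly lower average resolution; the payoff is far fewer dropped frames.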
 
If something is broken in software then they can fix it in software. Of the two platforms, in terms of being able to predict performance and hitting your target frame rate I would have put my money on the variable-clocked PS5 being more tricky. :yep2:

Devs can fix this on XSX by better tuning their dynamic resolution scaling formula. Just make it a little more pessimistic.

This is what I find so frustrating. The whole point of the setup of the PS5 is to make performance and power consumption predictable. This was very clear in Cerny’s presentation. It may seem logical to connect variable clocks with unpredictability, but in reality the design results in more stability and predictability, not less.
 
This is what I find so frustrating. The whole point of the setup of the PS5 is to make performance and power consumption predictable. This was very clear in Cerny’s presentation. It may seem logical to connect variable clocks with unpredictability, but in reality the design results in more stability and predictability, not less.
The thing is, if something is variable you can't always count on the performance. So dynamic resolution must be a bit more aggressive, to leave a bit of overhead in case performance drops for a while.
The worse performance of the Xbox might also be down to dev-kits. I don't mean retail consoles that are just "dev-kit"-enabled, I mean real dev-kit consoles. Normally those have more memory (and therefore often more bandwidth) and also a bit more power. I can imagine that MS must still be working on the power profile carried over from those "PC"-like dev-kits; maybe the profile on those units is a bit too extreme. On the other hand, I would still guess these are software/SDK problems. It is surprising that we don't see some of those problems on similar PC hardware with RDNA1. The hardware in the Xbox should be capable of more; it shouldn't be the hardware (unless there's a surprising bug in it).
 
The thing is, if something is variable you can't always count on the performance. So dynamic resolution must be a bit more aggressive, to leave a bit of overhead in case performance drops for a while.
Or, another perspective: with locked clocks, the onus is on you to run the hardware as hard as possible. Boost is typically handled by moving power away from idle transistors towards increased clock speeds when load is low. As load increases, power requirements increase as well; you are eventually going to be bound thermally or by power draw, so to keep handling an increased load, frequency must go down. This in some ways makes it predictable if you're having issues taking advantage of the saturation of the hardware.

With locked clocks, however, if you run light loads, the GPU is not fully leveraged and the idle power does not go towards increased clock speed. So we may be seeing scenarios where developers have found that increasing the resolution or not yields the same resulting performance, so they just increase it. If you consider boost clocks as trying to be optimal at all times, with the boost eventually settling on a bottleneck somewhere, locked clocks will suffer from bottlenecks in different areas, which means developers are forced to maximize each part of the rendering pipeline.

What I'm trying to say is: developers should know whether reducing resolution would improve performance, and if it wouldn't, they may as well push on other fronts while some other bottleneck is keeping the system from running faster.
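
If it helps, here is the boost behaviour as a toy model: clocks ride a fixed power budget, so frequency falls as utilisation rises. The numbers are invented and this is nothing like either console's actual power metering:

Code:
#include <algorithm>

// Toy power-limited boost: total power is capped, so frequency falls as
// more of the chip is active. All numbers invented for illustration.
double boostClockGhz(double utilization) { // utilization in [0, 1]
    const double lightLoadGhz = 2.2; // near-peak clock when lightly loaded
    const double heavyLoadGhz = 2.0; // floor under a worst-case load
    double u = std::clamp(utilization, 0.0, 1.0);
    // Light loads keep the full boost; heavy loads shed frequency to
    // stay inside the same power envelope.
    return lightLoadGhz - (lightLoadGhz - heavyLoadGhz) * u;
}

// A fixed-clock part returns the same frequency regardless of load: the
// power freed up at light load simply isn't reclaimed as extra speed.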
 
Or, another perspective: with locked clocks, the onus is on you to run the hardware as hard as possible.
Yup. Although it seems counter-intuitive, having a degree of flexibility for the CPU or the GPU to tilt the performance see-saw makes perfect sense from the power/thermal angle. And let's not forget that, in terms of variable clocks, AMD and Intel CPUs, and AMD, Intel and Nvidia GPUs, have done this for more than a decade. It does not make any sense to run silicon at high frequencies when under-utilised, and devs are more than used to just feeding work to the CPU or GPU and having it clock up and down - within a clock/performance envelope - as necessary.

Game consoles were the last hold-out of devices using fixed clocks.

But back to games: quite a few games' dynamic resolution systems are failing on XSX compared to PS5. Why are these systems correctly lowering the resolution on PS5 to better hit 60 and not on XSX? It makes you wonder if there are other dev-tuned weightings built in, preventing the system from doing what it's supposed to do. But why would you do that? Why not let the system free-wheel and find its own natural equilibrium? ¯\_(ツ)_/¯
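
For example, one speculative way a dev-tuned weighting could defeat the system: a resolution floor set too high for the platform, so the scaler can't trade away enough pixels. Illustrative code, not anyone's actual implementation:

Code:
#include <algorithm>
#include <cmath>

// If a tuned minimum scale sits too high for the platform, the controller
// wants to drop further but gets clamped, and the frame comes in late.
double nextScaleWithFloor(double lastScale, double lastGpuMs,
                          double budgetMs, double minScale) {
    double ideal = lastScale * std::sqrt(budgetMs / lastGpuMs);
    return std::clamp(ideal, minScale, 1.0); // the floor is where it goes wrong
}

// Example: a 20 ms frame measured at scale 0.9 needs ~0.82 to hit 16.6 ms,
// but with minScale = 0.9 the resolution never drops and the frame runs late.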
 
Yup. Although it seems counter-intuitive, having a degree of flexibility for the CPU or the GPU to tilt the performance see-saw makes perfect sense from the power/thermal angle. And let's not forget that, in terms of variable clocks, AMD and Intel CPUs, and AMD, Intel and Nvidia GPUs, have done this for more than a decade. It does not make any sense to run silicon at high frequencies when under-utilised, and devs are more than used to just feeding work to the CPU or GPU and having it clock up and down - within a clock/performance envelope - as necessary.

Game consoles were the last hold-out of devices using fixed clocks.

But back to games: quite a few games' dynamic resolution systems are failing on XSX compared to PS5. Why are these systems correctly lowering the resolution on PS5 to better hit 60 and not on XSX? It makes you wonder if there are other dev-tuned weightings built in, preventing the system from doing what it's supposed to do. But why would you do that? Why not let the system free-wheel and find its own natural equilibrium? ¯\_(ツ)_/¯
Layman's talk: could it not just be the variable clocks of the PS5 showing an ‘advantage’, in that the bottleneck gets the boost it needs quicker than the resolution can scale?
 
Layman's talk: could it not just be the variable clocks of the PS5 showing an ‘advantage’, in that the bottleneck gets the boost it needs quicker than the resolution can scale?
This is Cerny's 'a rising tide lifts all boats' comment in action.
 