Digital Foundry Article Technical Discussion [2023]

It doesn't? I toggle it on and off and get the exact same performance.

Yeah, true triple-buffered Vsync below the refresh rate does not cost performance in any typical sense in a scene with a flat frame-rate (it just changes the way frames are paced). Also, it is literally impossible to show any performance above 60fps at 60Hz if you match the output refresh rate with Vsync on, so... what would be the point?
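To illustrate (a rough sketch of my own, with a made-up fixed 20ms GPU frame time on a 60Hz display - not numbers from the game):

```python
import math

# Why strict double-buffered vsync quantizes frame-rate to divisors of the
# refresh, while true triple buffering below the refresh only changes pacing.
# Assumes a fixed 20 ms GPU frame time (a 50 fps-capable scene) at 60 Hz.
REFRESH_MS = 1000.0 / 60.0   # vblank interval
RENDER_MS = 20.0             # GPU time per frame (assumed)
SIM_MS = 10_000.0            # simulate 10 seconds

def avg_fps(triple_buffered: bool) -> float:
    t, frames = 0.0, 0
    while t < SIM_MS:
        t += RENDER_MS                # GPU finishes rendering a frame
        if not triple_buffered:
            # Double buffered: the back buffer can't be reused until the next
            # vblank flips it, so the GPU stalls until then.
            t = REFRESH_MS * math.ceil(t / REFRESH_MS)
        # Triple buffered: a third buffer is always free, so the GPU starts
        # the next frame immediately.
        frames += 1
    return frames / (SIM_MS / 1000.0)

print(f"double-buffered vsync: ~{avg_fps(False):.0f} fps")  # ~30 fps
print(f"triple-buffered vsync: ~{avg_fps(True):.0f} fps")   # ~50 fps
```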
 
Yeah, true triple-buffered Vsync below the refresh rate does not cost performance in any typical sense in a scene with a flat frame-rate (it just changes the way frames are paced). Also, it is literally impossible to show any performance above 60fps at 60Hz if you match the output refresh rate with Vsync on, so... what would be the point?

It shouldn't, but in my tests with this game vsync, at least at 60Hz, can cause rare drops to ~80% GPU utilization, with the frame-rate trying to hover around 40fps at points; most of the time it sits in the low 90s for utilization, losing you ~5fps versus vsync off if you need more than 90% GPU usage. It's kind of all over the place. Very difficult to maintain a 60fps lock unless you're always below 80% GPU usage.

Usually, the performance delta between DLSS and FSR is non-existent. HU did a few tests, and in select cases one may perform a bit better than the other, but this is really rare. They're generally within 1-2fps of one another.

There's a 2fps advantage to DLSS balanced vs FSR2 balanced on my 3060, which Alex mentions in the video. With a 1440p output res, FSR2 has a slightly higher internal res - 1505 x 847, vs 1485 x 835 on DLSS balanced.
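For what it's worth, those numbers fall straight out of the documented per-axis scale factors for the Balanced presets (0.58x for DLSS, 1/1.7x for FSR 2) - quick sanity check:

```python
# Internal resolution check for a 1440p output: DLSS Balanced renders at 58%
# per axis, FSR 2 Balanced at 1/1.7 (~58.8%) per axis.
out_w, out_h = 2560, 1440

for name, scale in (("DLSS Balanced", 0.58), ("FSR 2 Balanced", 1 / 1.7)):
    print(f"{name}: {out_w * scale:.1f} x {out_h * scale:.1f}")
# DLSS Balanced:  1484.8 x 835.2  -> the 1485 x 835 quoted above
# FSR 2 Balanced: 1505.9 x 847.1  -> the 1505 x 847 quoted above
```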
 
It shouldn't, but in my tests with this game vsync, at least at 60Hz, can cause rare drops to ~80% GPU utilization, with the frame-rate trying to hover around 40fps at points; most of the time it sits in the low 90s for utilization, losing you ~5fps versus vsync off if you need more than 90% GPU usage. It's kind of all over the place. Very difficult to maintain a 60fps lock unless you're always below 80% GPU usage.



There's a 2fps advantage to DLSS balanced vs FSR2 balanced on my 3060, which Alex mentions in the video. With a 1440p output res, FSR2 has a slightly higher internal res - 1505 x 847, vs 1485 x 835 on DLSS balanced.
Thanks for this fairer comparison. This is how NXGamer usually compares PS5 vs PC and this is mostly why he doesn't get the same results as Digital Foundry.
 
Thanks for this fairer comparison.

It's not a 'fair' comparison though, as the vsync is bugged. That is not how vsync operates in 95% of games, and Alex is right that vsync doesn't inherently cause any significant performance loss. I'm just pointing out that it's an issue currently with AW2, not with vsync in general. It's an issue that should get attention, yes, but it's not one that you use as platform-contention fodder if it's a software problem, outside of pointing out that the PC has a performance issue with vsync and that may hamper the port currently.

This is how NXGamer usually compares PS5 vs PC and this is mostly why he doesn't get the same results as Digital Foundry.

Oh please. When his GPU isn't being throttled due to overheating, for one, he doesn't get the 'same results' (whatever that means) as DF because he often fucks up and draws incorrect conclusions based on limited knowledge, and ultimately they just have different goals with their coverage. Alex's videos on PC ports are a comprehensive review of the technical merits of the PC version and what you need to do in order to optimize it to its fullest given the options available; the PS5/Series X are brought in as a baseline because those are the platforms where the developer usually makes the most considered cutbacks to achieve a performance target, and they often translate well to midrange PC hardware. NXGamer doesn't really do that - his videos are pure platform contention, designed to showcase his (often incorrect) understanding of the advantages of a particular architecture.

If you notice a PC port has a fucked-up vsync that seriously affects performance, you should mention it of course (I personally think games that mess this up should receive more attention than they do) - but you do it in the context of 'this is improper behavior that shouldn't be working this way'. Since you brought up NXGamer, a perfect example of covering this improperly was his perf comparison of A Plague Tale: Innocence between PC and PS5 - in the segment screenshotted below, he's talking about how much better the PS5 is performing, but from his own overlay you can clearly see his 2070 is at 85% usage, delivering 43fps with vsync. That should be a clear indicator that something is not right - this is absolutely not normal vsync behavior - and in this case, being DX11, there's a relatively easy fix to boot: enabling Fast Sync.

(This is actually extremely similar to how AW2 works with vsync, but being DX12 there's no easy solution except to hope for a patch)

You shouldn't have to do this in any game, mind you, and it's a knock on the platform/game that you have to - bring it up if you want as a point against the hassle of PC gaming, fine. But if you care about the most accurate representation of the platforms, you at least notice what's staring you right in the face.
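As a rough back-of-envelope (mine, not from the video) of why that overlay should be a red flag - a naive linear estimate of what the card could deliver uncapped:

```python
# If a GPU delivers F fps at U utilization while vsync-throttled, it could
# plausibly deliver roughly F / U fps uncapped. Naive linear scaling - real
# scaling is rarely perfectly linear - but it's enough to spot the problem.
def uncapped_estimate(fps: float, gpu_util: float) -> float:
    return fps / gpu_util

# The 2070 in the A Plague Tale screenshot: 43 fps at 85% utilization.
print(f"~{uncapped_estimate(43, 0.85):.0f} fps")  # ~51 fps being left on the table
```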

[attached screenshot: A Plague Tale: Innocence performance overlay]
 
Something cool about this game is how willing it is to throw different environments with radically different performance requirements at the player. I think it explains a lot of the official recommended specs -- I'm not playing with an fps counter up, but there are *huge* gulfs between scenes in dense foliage with complex lighting vs simple interiors and urban scenes. There are some scenes the 3080 runs fairly well, and the game frontloads a lot of the ones that it doesn't.
 
Something cool about this game is how willing it is to throw different environments with radically different performance requirements at the player. I think it explains a lot of the official recommended specs -- I'm not playing with an fps counter up, but there are *huge* gulfs between scenes in dense foliage with complex lighting vs simple interiors and urban scenes. There are some scenes the 3080 runs fairly well, and the game frontloads a lot of the ones that it doesn't.

I get about a 30% performance increase in town vs the forest.

The biggest performance hits I’ve found are post processing, volumetrics and shadow resolution.

On a 3080, ray tracing isn’t viable for me. The low preset might work and stay above 60 with DLSS, but I’d rather have better performance. The game honestly looks amazing without ray tracing, even on low settings.
 
I think it explains a lot of the official recommended specs

The devs actually posted somewhere that they deliberately went heavy on the requirements in order to guarantee a particular performance level. "Under promise and over deliver" I believe their words were (sorry I don't have a link to the post itself).
 
It shouldn't, but in my tests with this game vsync, at least at 60Hz, can cause rare drops to ~80% GPU utilization, with the frame-rate trying to hover around 40fps at points; most of the time it sits in the low 90s for utilization, losing you ~5fps versus vsync off if you need more than 90% GPU usage. It's kind of all over the place. Very difficult to maintain a 60fps lock unless you're always below 80% GPU usage.



There's a 2fps advantage to DLSS balanced vs FSR2 balanced on my 3060, which Alex mentions in the video. With a 1440p output res, FSR2 has a slightly higher internal res - 1505 x 847, vs 1485 x 835 on DLSS balanced.
Would you mind uploading a video of the spot where that occurs in game? I wanna see if I can reproduce it.
 
The devs actually posted somewhere that they deliberately went heavy on the requirements in order to guarantee a particular performance level. "Under promise and over deliver" I believe their words were (sorry I don't have a link to the post itself).

Which is how it should be. The worst thing they can do is release specs that generally hit the targets but then tank in worst case areas of the game. It's better to recommend something that can be guaranteed for the entirety of the game. I think they should have said "30 fps minimum" and "60 fps minimum" to let people know performance could in some cases be higher.
 
Would you mind uploading a video of the spot where that occurs in game? I wanna see if I can reproduce it.

It's hyperlinked in my post. It happens all the time in the forest area - basically any place where you're right on the edge of needing high-90s utilization for 60fps will see vsync throttle this back.

Here's a more stressful scenario using console settings at 4K output/DLSS Performance. You can see how vsync can dip the game into the low 80s for utilization at points and seemingly wants to fight around that 40fps barrier.

Bear in mind it's not perfect utilization without vsync either; it's far better, but I see it dip into the mid 90s a fair bit without vsync too. Perhaps a single-threaded performance limitation on a 12400F? Dunno.
 
Finished building my new PC, so I fired up Alan Wake 2 and am wondering if the game supports Ada's OMM (Opacity Micro-Maps)? That would explain why the forest section is so heavy vs the rest of the game.

@Dictator - any idea?

Edit: Thinking about it, as they're using mesh shaders, I wonder if they've made all of the foliage out of actual geometry and there's just a shitload of polygons to ray-test.
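For anyone unfamiliar with what OMM buys you, here's a toy model of the idea - the opaque/transparent split is made up purely for illustration:

```python
# Toy model of Opacity Micro-Maps (OMM) on alpha-tested foliage. Without OMM,
# every candidate ray hit on an alpha-tested triangle invokes an any-hit
# shader to sample the opacity texture. With OMM, micro-triangles that are
# pre-classified as fully opaque or fully transparent resolve in the RT
# hardware, and only "unknown" micro-triangles (leaf edges) hit the shader.
candidates = 1_000_000     # candidate ray/triangle hits in a foliage-heavy scene
unknown_share = 0.15       # assumed share of ambiguous micro-triangles (made up)

anyhit_without_omm = candidates                    # every hit runs any-hit
anyhit_with_omm = int(candidates * unknown_share)  # only ambiguous ones do

print(f"any-hit invocations without OMM: {anyhit_without_omm:,}")
print(f"any-hit invocations with OMM:    {anyhit_with_omm:,}")  # ~85% fewer
```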
 
Does anyone know how the PS5 version was done, since theoretically at least it has support only for Primitive Shaders and NOT Mesh Shaders?
 
Does anyone know how the PS5 version was done, since theoretically at least it has support only for Primitive Shaders and NOT Mesh Shaders?
They probably used primitive shaders, since those already offer most of the functionality of mesh shaders. If they used amplification shaders too, then they likely emulated them with a compute shader pre-pass - but even on RDNA2, amplification shaders are emulated to an extent anyway. Most of the time there's no elaborate emulation or workaround needed to ship a game using mesh shaders...
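A heavily simplified sketch of what that compute pre-pass emulation looks like conceptually (the structure and names here are mine, not anything from Remedy or the consoles):

```python
# Pass 1 plays the role of the amplification/task shader: cull meshlets and
# write the survivors plus an indirect dispatch count. Pass 2 plays the mesh
# shader: launched indirectly, one workgroup per surviving meshlet. On
# hardware without native task shaders, the driver or engine inserts
# something like pass 1 itself.
meshlets = [{"id": i, "visible": i % 3 != 0} for i in range(12)]

# Pass 1: compute pre-pass ("task" stage)
survivors = [m["id"] for m in meshlets if m["visible"]]
indirect_args = {"group_count": len(survivors)}  # written to a GPU buffer in reality

# Pass 2: "mesh" stage, dispatched indirectly from indirect_args
for meshlet_id in survivors:
    pass  # expand this meshlet into vertices/primitives here

print(f"culled {len(meshlets) - len(survivors)} of {len(meshlets)} meshlets; "
      f"dispatching {indirect_args['group_count']} mesh workgroups")
```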
 
Does anyone know how the PS5 version was done, since theoretically at least it has support only for Primitive Shaders and NOT Mesh Shaders?
This was discussed not so long ago


If I understand correctly everything that was said (probably not), mesh shaders can be implemented using primitive shaders. In PC space this can be a problem if the driver doesn't expose primitive shaders (which seems to be the case for older RDNA1 GPUs). But PS5 APIs are a different story, and they have access to them - so theoretically they can implement mesh shaders that way.
 
They probably used primitive shaders, since those already offer most of the functionality of mesh shaders. If they used amplification shaders too, then they likely emulated them with a compute shader pre-pass - but even on RDNA2, amplification shaders are emulated to an extent anyway. Most of the time there's no elaborate emulation or workaround needed to ship a game using mesh shaders...
Amplification shader = task shader, according to Timur's blog:

“First things first. Under the hood, task shaders are compiled to a plain old compute shader”

 
I have a 2080 Ti I can drop in later
[attached screenshot: performance benchmark]


In case anyone is curious, all my performance captures from the video and here are with Vsync off on PC, running on the Ryzen 5 3600:
(this is an RTX 2080 Ti screenshot)
[attached screenshot: performance benchmark]
 