Digital Foundry Article Technical Discussion [2023]

When they showed the UE5 tech demo on PS5 we all wondered what the games would look like. Now we know. Remnant 2, and now this. Disappointing performance and blurry visuals.
We need to see what Ninja Theory or especially The Coalition will bring. The latter are masters at using the Unreal Engine.
 

The Series S version looks like Overwatch did when it first came out and I was playing it on a MacBook with an integrated Intel GPU.

Interestingly, the performance advantage for XSX over PS5 (in stress points) is approaching the difference in compute and bandwidth. Nanite and Lumen seem good candidates for leveraging compute and those transparencies might be very bandwidth hungry.

The Series S version resolution though ... it might be nice if it had a 30 fps mode.

So 720p with drops into the 40s (on PS5). All of a sudden the 4090 being unable to lock 60fps at 4K (9x the resolution) doesn't seem so bad!
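For anyone checking that 9x figure, it's straight pixel counting (quick Python, assuming native 4K against the reported 720p internal res):

    # pixel count of native 4K vs a 720p internal resolution
    print((3840 * 2160) / (1280 * 720))   # -> 9.0, so the 4090 is pushing 9x the pixels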

The PC specs make a bit more sense now insofar as they are also targeting 60fps/720p with the 5700X / 2080S.

It'll be interesting to see how the settings presets compare though.

There's one point where the PS5 even dips into the mid 30s. This game is super demanding.

Be glad for DLSS if you've got it!
 
I've just watched the video and wow, 720p and they still have pretty serious frame rate drops.

If this is what's required to get to 60fps on console in UE5 games then they all need to be capped to 30fps so they can get the resolution up.

And regarding FSR3, it simply won't deliver good results here as the base image is terrible and the game drops below the recommended 60fps base frame rate.

From the UE5 docs, Nanite should take roughly 4.5ms at 1440p, and Lumen should take 4ms at 1080p with Lumen GI and Lumen reflections set to the High preset on a PS5.

Lumen targets 30 and 60 frames per second (fps) on consoles with 8ms and 4ms frame budgets at 1080p for global illumination and reflections on opaque and translucent materials, and volumetric fog. The engine uses preconfigured Scalability settings to control Lumen's target FPS. The Epic scalability level targets 30 fps. The High scalability level targets 60 fps.

When considering these GPU times together, it's approximately 4.5ms combined for what would be equivalent to Unreal Engine 4's depth prepass plus the base pass. This makes Nanite well-suited for game projects targeting 60 FPS.

I know those numbers are just estimates, but they've got maybe 50% of their frame budget used up if they were at 1080p. Don't know how you get from that to 720p. I'm sure there's a lot of discovery going on about performance pitfalls with the new tech.
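The ~50% falls out of simple budget math if you take the docs' numbers at face value (a rough sketch, assuming a straight 16.67ms budget at 60fps):

    # back-of-the-envelope frame budget at 60 fps vs the docs' Nanite/Lumen estimates
    frame_budget_ms = 1000 / 60      # ~16.67 ms per frame
    nanite_ms = 4.5                  # docs' estimate for Nanite's geometry passes
    lumen_ms = 4.0                   # docs' estimate for Lumen GI + reflections (High preset)
    print((nanite_ms + lumen_ms) / frame_budget_ms)   # ~0.51 -> roughly half the frame gone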
 
From the UE5 docs, Nanite should take roughly 4.5ms at 1440p, and Lumen should take 4ms at 1080p with Lumen GI and Lumen reflections set to the High preset on a PS5.

I know those numbers are just estimates, but they've got maybe 50% of their frame budget used up if they were at 1080p. Don't know how you get from that to 720p. I'm sure there's a lot of discovery going on about performance pitfalls with the new tech.

The Matrix UE5 demo on PS5/XSX was ~900p at (an unstable) 30fps.

So with this game being (an unstable) 60fps, 720p is about right when factoring in the resolution and frame rate in The Matrix demo.
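In raw pixel-throughput terms the two targets are at least in the same ballpark (rough math, assuming ~1600x900 for the Matrix demo's ~900p):

    # pixels per second = resolution x frame rate, ignoring everything else
    matrix_demo = 1600 * 900 * 30    # ~43 Mpix/s at ~900p / 30fps
    aveum       = 1280 * 720 * 60    # ~55 Mpix/s at 720p / 60fps
    print(aveum / matrix_demo)       # ~1.28x, so 720p60 isn't an unreasonable step from 900p30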
 
I think Tom somehow erred in the PS5 resolution count. Settings aside, this PC vs PS5 comparison suggests to me the PS5 is likely around 1080p internal res. Surely not 720p.

 
I think Tom somehow erred in the PS5 resolution count. Settings aside, this PC vs PS5 comparison suggests to me the PS5 is likely around 1080p internal res. Surely not 720p.


Far more likely it's a simple sharpening difference between the consoles.

Neither the performance profile on PC nor on XBSX even remotely supports the PS5 running at 1080p, which would be 2.25x more pixels than the Series X.
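The 2.25x is just the ratio of pixel counts:

    print((1920 * 1080) / (1280 * 720))   # -> 2.25, 1080p vs 720p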

Also that video lol... "PS5 is running at 1800p internal".
 
Far more likely it's a simple sharpening difference between the consoles.

Neither the performance profile on PC nor on XBSX even remotely supports the PS5 running at 1080p, which would be 2.25x more pixels than the Series X.

Also that video lol... "PS5 is running at 1800p internal".

Lol, wtf is this channel.

The PS5 does look sharper in the side-by-sides (in the DF video), whether that's from resolution or extra sharpening.
 
The Matrix UE5 demo on PS5/XSX was ~900p at (an unstable) 30fps.

So with this game being (an unstable) 60fps, 720p is about right when factoring in the resolution and frame rate in The Matrix demo.

The Matrix demo wasn't UE5.1 or 5.2 though, right? I believe Lumen had some major changes to support 60fps on console, and maybe nanite did as well.

Fortnite averages 55% of 4K at 60 fps with all of the Lumen and Nanite features enabled. And despite what people say it's a pretty geometrically complex game now that they've redone assets for Nanite. The blades of grass are Nanite meshes, as are the small rocks on the ground, the bricks in a wall, the tiles on a roof, etc.
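If that 55% is a per-axis screen percentage (which is how UE's screen percentage works; my assumption about what the figure refers to), the internal res works out to:

    # 55% screen percentage of a 3840x2160 output, applied per axis
    w, h = int(3840 * 0.55), int(2160 * 0.55)
    print(w, h)    # -> 2112 x 1188, i.e. a bit above 1080p internally before upscaling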
 
Far more likely it's a simple sharpening difference between the consoles.

Neither the performance profile on PC nor on XBSX even remotely supports the PS5 running at 1080p, which would be 2.25x more pixels than the Series X.

Also that video lol... "PS5 is running at 1800p internal".

Obviously a typo by the video poster.

My point is next to the PC running at native 1440p, the PS5 does not look like an FSR2 4K output upscaled from a 720p base. And if it is, then I say that is a relatively excellent result. I also didn't see any image sharpening artifacts in the DF video or this one. Curious as to what's truly going on here. To me the PS5/XSX difference is almost as noticeable as the resolution difference between the Series S and Series X.
 
The Matrix demo wasn't UE5.1 or 5.2 though, right? I believe Lumen had some major changes to support 60fps on console, and maybe nanite did as well.
It's not just those though.

The game is using Niagara and also chucking a shit load of bandwidth sucking particles around the screen.

I'm not sure of the performance cost of Niagara?

And wasn't the 60fps mode using a reduced SDF for Lumen?

Do we know if Aveum is using SDF for Lumen?
 
It's not just those though.

The game is using Niagara and also chucking a shit load of bandwidth sucking particles around the screen.

I'm not sure of the performance cost of Niagara?

And wasn't the 60fps mode using a reduced SDF for Lumen?

Do we know if Aveum is using SDF for Lumen?

60fps mode uses software lumen for sure, I think at the high preset:

High scalability level disables Detail Traces, and Lumen traces a single merged Global Distance Field instead of individual Mesh Distance Fields. Tracing the global distance field makes tracing independent from the number of instances and their overlap with other instances. It's also a great fit for 60 fps games and content with a large amount of overlapping instances.

This should be the preset that Aveum would be using. Of course there are a lot of parameters they can tweak if they want, and they may not be using a preset, but it should be something close to the high preset for 60 fps on PS5/Series X.
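For reference, a rough sketch of how you could pin that preset from editor Python in a UE5 project. This is only an illustration of the level the docs describe: sg.GlobalIlluminationQuality / sg.ReflectionQuality are the stock scalability groups, and r.Lumen.TraceMeshSDFs is the detail-trace toggle as I remember it from the docs, so treat the exact cvar names as assumptions that may differ between engine versions.

    # Editor-Python sketch: select the High (60 fps) scalability level for Lumen,
    # which per the docs above drops Detail Traces and traces only the merged
    # Global Distance Field. Must run inside the Unreal Editor's Python environment.
    import unreal

    for cmd in (
        "sg.GlobalIlluminationQuality 2",   # 2 = High
        "sg.ReflectionQuality 2",           # 2 = High
        "r.Lumen.TraceMeshSDFs 0",          # assumed cvar: global distance field only, no per-mesh detail traces
    ):
        unreal.SystemLibrary.execute_console_command(None, cmd)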

I have no idea about Niagara performance.
 
Hello! Sorry, absolutely right, it is sharper on PS5. I've updated the article here to reflect the point - it's a fair criticism, and my fault for leaning on the numbers too much.

The data is correct (both are 720p native in every shot tested), but the figures only really explain so much. It could be any number of factors: different post processing, AMD's CAS, or even VRS. I'm now asking the team what the reason is for the difference.

It’s 720p on both the high-end machines.

https://www.eurogamer.net/community/profile/72e4kp
 
This quote from the DF article



That makes me think two things.

1. The older GPUs this works on might see a massively reduced uplift, as they won't have the spare compute available to process the optical flow data.

2. How much of an improvement are compute-heavy engines like UE5 going to see, even on big RDNA3 GPUs?

The optical flow pass might become a bottleneck and limit how much of a boost FSR3 gives, and remember they also use compute for RT work.

So AMD GPUs will have graphics, RT and now frame gen all fighting for those compute cycles.

At least on Nvidia the optical flow runs on a dedicated hardware unit (the OFA), so it shouldn't cause a bottleneck.
About the async compute: this is literally the last pass of a frame, so maybe they can run it async alongside the next frame's rasterization passes, like shadow map rendering and such? (Your point about UE5 being compute heavy is still valid, but for other engines this might work, and UE5 itself still uses hardware rasterization as well.)
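To make that concrete, here's a toy timing model (all numbers invented purely for illustration) of frame gen competing with the current frame for shader time versus being overlapped with the next frame's raster-heavy passes:

    # Toy model: cost added per frame by shader-based optical flow / frame generation.
    frame_gen_ms       = 1.5    # assumption: optical flow + interpolation on compute
    overlap_efficiency = 0.7    # assumption: fraction hidden under the next frame's shadow/depth passes

    serial_added_ms = frame_gen_ms                            # compute-bound engine, nothing to hide behind
    async_added_ms  = frame_gen_ms * (1 - overlap_efficiency) # overlapped with raster-heavy work
    print(serial_added_ms, async_added_ms)                    # 1.5 ms vs ~0.45 ms added to frame time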
 