We need to see what Ninja Theory or, especially, The Coalition will bring. The latter are masters at using the Unreal Engine.
The Series S version looks like Overwatch did when it first came out and I was playing it on a MacBook with an integrated Intel GPU.
So 720p with drops into the 40s (on PS5). All of a sudden the 4090 being unable to lock 60fps at 4K (9x the resolution) doesn't seem so bad!
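A quick pixel-count check backs up the 9x figure (a throwaway sketch, not from the article):

```python
# Pixel counts of 4K UHD vs the reported 720p internal resolution
def pixels(width, height):
    return width * height

ratio = pixels(3840, 2160) / pixels(1280, 720)
print(ratio)  # 9.0
```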
The PC specs make a bit more sense now, insofar as they are also targeting 60fps/720p with the 5700X / 2080S.
It'll be interesting to see how the settings presets compare though.
I've just watched the video and wow, 720p and they still have pretty serious frame rate drops.
If this is what's required to get to 60fps on console in UE5 games then they all need to be capped to 30fps so they can get the resolution up.
And regarding FSR3, it simply won't deliver good results here, as the base image is terrible and the game drops below the recommended 60fps base frame rate.
Lumen targets 30 and 60 frames per second (fps) on consoles with 8ms and 4ms frame budgets at 1080p for global illumination and reflections on opaque and translucent materials, and volumetric fog. The engine uses preconfigured Scalability settings to control Lumen's target FPS. The Epic scalability level targets 30 fps. The High scalability level targets 60 fps.
When considering these GPU times together, it's approximately 4.5ms combined for what would be equivalent to Unreal Engine 4's depth prepass plus the base pass. This makes Nanite well-suited for game projects targeting 60 FPS.
There were lots of ugly and badly performing UE4 games, and some beautiful ones. When they showed the UE5 tech demo on PS5, we all wondered what the games would look like. Now we know: Remnant 2, and now this. Disappointing performance and blurry visuals.
Let’s hope Lords of Fallen 2 impresses us all and makes me feel like a fool for posting that comment.
From the UE5 docs, Nanite should take roughly 4.5ms at 1400p, and Lumen should take 4ms at 1080p with Lumen GI and Lumen reflections set to the High preset on a PS5.
I know those numbers are just estimates, but they've got maybe 50% of their frame budget used up if they were at 1080p. Don't know how you get from that to 720p. I'm sure there's a lot of discovery going on about performance pitfalls with the new tech.
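Those numbers can be sanity-checked against a 60fps frame budget (a rough sketch; the 4.5ms and 4ms figures are the docs estimates quoted above, not measurements from this game):

```python
# Rough budget check using the UE5 docs' estimates quoted above:
# Nanite ~4.5 ms (prepass + base pass equivalent), Lumen ~4 ms at the
# High (60 fps) preset. Neither number is a measurement from this game.
frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps

nanite_ms = 4.5
lumen_ms = 4.0
used = nanite_ms + lumen_ms

print(f"{used:.1f} ms used of {frame_budget_ms:.1f} ms ({used / frame_budget_ms:.0%})")
# -> 8.5 ms used of 16.7 ms (51%)
```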
I think Tom somehow erred in the PS5 resolution count. Settings aside, this PC vs PS5 comparison suggests to me the PS5 is likely around 1080p internal res. Surely not 720p.
Far more likely it's a simple sharpening difference between the consoles.
Neither the performance profile on PC nor on XBSX even remotely supports the PS5 running at 1080p, which would be 2.25x the pixel count of the Series X's 720p.
Also that video lol... "PS5 is running at 1800p internal".
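For what it's worth, the 2.25x figure is just the 1080p-vs-720p pixel ratio (a throwaway check, not from the article):

```python
# Pixel-count ratio behind the 2.25x claim: 1080p vs 720p
ratio = (1920 * 1080) / (1280 * 720)
print(ratio)  # 2.25
```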
The Matrix UE5 demo on PS5/XSX was ~900p at (an unstable) 30fps.
So with this game being (an unstable) 60fps, 720p is about right when factoring in the resolution and frame rate in The Matrix demo.
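Back-of-the-envelope version of that scaling argument (a sketch assuming ~900p means 1600x900 and that GPU cost scales roughly linearly with pixel count, which it only loosely does):

```python
matrix_pixels = 1600 * 900         # The Matrix demo: ~900p at 30 fps
budget_halved = matrix_pixels / 2  # doubling frame rate halves the pixel budget

p720 = 1280 * 720
print(budget_halved / p720)  # ~0.78, i.e. roughly in 720p territory
```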
To me the PS5/XSX difference is almost as noticeable as the resolution difference between the Series S and Series X.
The Matrix demo wasn't UE5.1 or 5.2 though, right? I believe Lumen had some major changes to support 60fps on console, and maybe Nanite did as well.
Just.....no.
It's not just those though.
The game is using Niagara and also chucking a shitload of bandwidth-sucking particles around the screen.
I'm not sure what the performance cost of Niagara is, though.
And wasn't the 60fps mode using a reduced SDF for Lumen?
Do we know if Aveum is using SDF for Lumen?
High scalability level disables Detail Traces, and Lumen traces a single merged Global Distance Field instead of individual Mesh Distance Fields. Tracing the global distance field makes tracing independent from the number of instances and their overlap with other instances. It's also a great fit for 60 fps games and content with a large amount of overlapping instances.
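In stock UE5 that High-level behavior maps to a scalability console variable (name from Epic's BaseScalability.ini; a shipping game may override or lock it, so treat this as illustrative only):

```ini
; High (60 fps) global illumination scalability, as described above
[GlobalIlluminationQuality@2]
; Skip per-mesh Detail Traces; trace only the merged Global Distance Field
r.Lumen.TraceMeshSDFs=0
```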
Hello! Sorry, absolutely right, it is sharper on PS5. I've updated the article here to reflect the point - it's a fair criticism, and my fault for leaning on the numbers too much.
The data is correct (both are 720p native in every shot tested), but the figures only really explain so much. It could be any number of factors: different post processing, AMD's CAS, or even VRS. I'm now asking the team what the reason is for the difference.
It’s 720p on both the high-end machines.
Updated my post.
Great to see such a rapid response. Might be worth linking the source though.
About the async compute: this is literally the last pass of a frame, so maybe they can run it async with the next frame's rasterization passes, like shadowmap rendering and such? (Your point about UE5 being compute-heavy is still valid, but for other engines this might work -- and UE5 itself still uses hardware rasterization as well.)
This quote from the DF article makes me think two things.
1. The older GPUs this works on might see a massively reduced uplift, as they won't have the spare compute available to process the optical flow data.
2. How much of an improvement are compute-heavy engines like UE5 going to see, even on big RDNA3 GPUs?
The optical flow work might become a bottleneck and limit how much of a boost FSR3 gives, and remember they also use compute for RT work too.
So AMD GPUs will have graphics, RT, and now frame gen all fighting for those compute cycles.
At least on Nvidia the optical flow comes from a dedicated hardware unit (the OFA), so it shouldn't cause a bottleneck.