Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

"Many-Light Rendering Using ReSTIR-Sampled Shadow Maps (to appear in Eurographics 2025)" 👀


Pretty sure it uses Nanite; the absence of LOD pop makes Nanite a very good bet.

From the DF article on the game

The game is also one of the most polished releases we've experienced on console for quite some time, which is surprising as it's an Unreal Engine 5 release - which allows for some striking visual features, but also typically opens the door to some familiar issues, vis a vis compromised image quality and traversal and/or shader compilation stutter.

The reason is perhaps that the game doesn't look to use any of the headline UE5 features, such as Lumen global illumination, Nanite geometry, MetaHuman NPCs or virtual shadow maps (VSMs).
 
A game called Brickadia has integrated PhysX 5 in UE5.1 engine, instead of Chaos, citing huge performance uplifts with PhysX 5 over Chaos in large simulations.
Another developer shared a similar experience: the developer of Kingsmaker (a physics-heavy game) said they didn't switch to UE5 for their game and instead stayed on UE4, because nothing was faster than PhysX on the CPU (timestamped).

 
https://github.com/EpicGames/UnrealEngine/commit/41bd224e05888559bb154c76327a2214cf0fd9b4 (you need a GitHub account that has been granted access to the repository, otherwise you get a 404). It looks like Epic is developing a plugin that converts static actors that only represent geometry into a non-actor type, and also implements streaming for them off the game thread. Since the objects are no longer UObjects, this will also save some memory and garbage-collection time.

Some developers like CDPR already rolled their own, but now smaller developers get this kind of system too. I like that they made it a separate plugin, as it gives studios a reference implementation to alter for their use case (for instance, CDPR said their system distinguishes between objects inside and outside buildings). This won't solve the stutters you get from spawning actors, but it might free up some game-thread time to give games headroom to make those stutters less severe.
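To illustrate why that conversion pays off, here's a minimal sketch of the idea in plain C++. This is hypothetical, not Epic's actual code: the type names and fields are made up, and a real implementation carries far more state. The point is just that a plain-data instance has no vtable and no GC bookkeeping, which is what makes millions of them cheap to hold and to stream from a worker thread.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical "lightweight instance" (not Epic's actual layout): plain
// data with no virtual table and no per-object garbage-collection state,
// so large worlds can hold millions and stream whole batches off the
// game thread with a plain copy.
struct LightweightInstance {
    float    transform[12]; // 3x4 world matrix
    uint32_t meshId;        // index into a shared mesh/material table
};

// Stand-in for a full actor object: virtual dispatch forces a vtable
// pointer, and engine-side lifecycle tracking adds further per-object
// state (simulated here with one extra field).
struct HeavyActor {
    virtual ~HeavyActor() = default;
    virtual void Tick(float dt) {}
    float    transform[12];
    uint32_t meshId;
    uint64_t gcState; // stand-in for reachability/GC bookkeeping
};

// A streamed world cell is just a contiguous batch of instances; building
// it on a worker thread is safe because the data is trivially copyable.
using StreamedCell = std::vector<LightweightInstance>;
```

Every lightweight instance saves the vtable pointer and the per-object tracking overhead, and because the batch is trivially copyable it never has to touch the game thread or the garbage collector during streaming.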
 
This post is an edited copy from another thread. I think it fits the UE5 thread, because most of the games mentioned are on UE5.
Some time ago I completed Silent Hill 2 Remake and Stellar Blade. Both games are very good, but in terms of graphics they look like they could have been released for PS4 with some minor changes. Don't get me wrong, both games look very good, but I can't say this is PS5-level graphics. (Yes, I played them on PS5 in 30 fps mode.) Before that I completed Alan Wake 2, also on PS5. In AW2 the graphics are truly next gen. I even looked at screenshots I took on PS5 in The Quarry. Characters and environments look better in that game than in SH2, and that is a PS4 game. OK, I played it on PS5, but I watched a comparison between the PS4 and PS5 versions, and they look almost the same. So I really don't understand why those games weren't released for PS4.
One more thing: I remember some people disagreed with me before and will disagree now, but I still think the top 3 games for graphics on consoles are
Indiana Jones, Stalker 2 and Hellblade 2. Last time I said that I hadn't played AW2 yet, but now, after finishing it, I still think the same. The main difference is that there is less geometry detail in AW2 compared to the other three games. Could that be because the PS5 doesn't have mesh shaders? What difference do you think there will be in the PS5 version of Indiana Jones?
 
The main thing I wanted to know is: do mesh shaders give Xbox consoles some real advantages? Or will they in the future?

Mesh shaders are a DX12 implementation of features that AMD first introduced under the name "Next-Gen Geometry" years earlier. Different GPU architectures may support more or less comprehensive subsets of what is now called mesh shaders, but most of it is in the PS5, even though Sony doesn't use that nomenclature.
 
Do you have data which suggests otherwise?

This is highly likely to be driver and "desktop" independent. What specific use case are you envisioning in which amplification shaders becomes a performance advantage?
 
From independent testing, there's no real advantage to official support for amplification shaders. If you want maximum performance, you're better off just using ExecuteIndirect (for indirect draws) with a compute shader pre-pass ...
I don’t believe this is true in every situation. If every engine was the same, and all hardware the same perhaps this would hold more value. The reality is that with games you still need to look at the systems working with each other, synthetic benchmarks are great, but indirect draws with a compute shader prepass doesn’t bring you down the 3D pipeline like how mesh shaders do.

If your engine is heavily reliant on the 3D pipeline, mesh shaders appear to be a great slot-in.
 
Main thing I wanted to know is mesh shaders give Xbox consoles some real advanteges? Or will in the future?
I think for certain games like Doom, the id Tech engine is well optimized for Series consoles. The PS5 does have a mesh-shader equivalent, so imo it should be close to no difference on that front. Amplification shaders may prove to bring additional performance, but we've not seen a game ship with them yet.
 
I don’t believe this is true in every situation. If every engine was the same, and all hardware the same perhaps this would hold more value. The reality is that with games you still need to look at the systems working with each other, synthetic benchmarks are great, but indirect draws with a compute shader prepass doesn’t bring you down the 3D pipeline like how mesh shaders do.

If your engine is heavily reliant on the 3D pipeline, mesh shaders appear to be a great slot-in.
I don't think the value of mesh or primitive shaders is up for debate anymore, since we're seeing some genuine use cases for them. But there's definitely room for skepticism about other parts of the mesh shading API, like amplification shaders: it's doubtful that AMD hardware sees any performance advantage when they're implemented as compute shaders under the hood, so there isn't a whole lot of "special hardware" accelerating the functionality ...
 
Do you have data which suggests otherwise?

This is highly likely to be driver and "desktop" independent. What specific use case are you envisioning in which amplification shaders becomes a performance advantage?
Implementation of a shared API feature is by definition driver and platform dependent. That's like... what a shared API is, lol. The linked blog post is mostly about the differences in the AMD and NVIDIA implementations he tested!

(I'm not claiming it's faster -- I don't see why it would be -- but I also don't see why it would be slower in principle (which it clearly was for the cases tested by the poster).)
 