Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Why is that? Really curious to know.
Like the material editor, it exposes shader-like programming to content/artists who don't always have a good idea of the performance tradeoffs. Since it's effectively code-in-content and defined at a very fine grain (per-pixel for light functions, per-vertex for stuff like world position offset), it's extremely difficult or sometimes impossible to optimize later at an engine level, even when things like the pixel counts, light counts and vertex counts get increased massively and they are no longer appropriate domains to run such things at. Ex. vertex shaders and per-vertex painting are not a good idea with Nanite-level detail meshes, but people are still used to the power afforded by the engine features.

In most cases the artists just want a way to define certain things in content rather than code, but as a side-effect they often use the most powerful tool even to accomplish simple things. Ex. as in the video, using a per-pixel light function to make a light flicker uniformly is inefficient, and can never really be optimized at an engine level because the content of a light function could be doing anything.

That said, expert artists who are experienced with the tools can use the power to do amazing things and largely avoid or adapt to the various pitfalls. As this just came up again in the DF thread, this is the tradeoff of exposing powerful features. While we programmers (and sometimes end users) can sigh that people are making things too heavy or inefficient, it is the power of the tools that allow the experts to do amazing things as well.
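To make the light-function example above concrete, here is a minimal C++ sketch of the cheap alternative: driving a uniform flicker once per frame on the CPU by scaling the light's intensity, rather than evaluating a light-function material per shaded pixel. The class and property names are illustrative, not from the engine or the video.

```cpp
// Minimal sketch (illustrative names): a uniform flicker driven once per frame on the
// CPU via the light's intensity, instead of a per-pixel light function material.
#include "GameFramework/Actor.h"
#include "Components/PointLightComponent.h"
#include "FlickeringLight.generated.h"

UCLASS()
class AFlickeringLight : public AActor
{
    GENERATED_BODY()

public:
    AFlickeringLight()
    {
        PrimaryActorTick.bCanEverTick = true;
        Light = CreateDefaultSubobject<UPointLightComponent>(TEXT("Light"));
        RootComponent = Light;
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        const float Time = GetWorld()->GetTimeSeconds();
        // Two sines at unrelated frequencies give a cheap, non-repetitive-looking flicker.
        const float Flicker = 0.75f + 0.15f * FMath::Sin(Time * 23.0f)
                                    + 0.10f * FMath::Sin(Time * 7.3f);
        // One scalar update per frame; no per-pixel work for the renderer to chew on.
        Light->SetIntensity(BaseIntensity * Flicker);
    }

private:
    UPROPERTY(VisibleAnywhere)
    UPointLightComponent* Light = nullptr;

    UPROPERTY(EditAnywhere)
    float BaseIntensity = 5000.0f;
};
```

The same flicker could just as well be driven from a Blueprint timeline or curve; the point is only that a uniform flicker needs one value per frame, not a shader evaluated at every lit pixel.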
 
@Andrew Lauritzen quick Nanite question. Is the cost mostly flat or does it tend to scale linearly with the geometry count? If a dev switches to the Nanite pipeline later in development and doesn’t necessarily have high geometry assets with any level of consistency are they paying a similar performance cost as a developer who uses high geometry art more pervasively?
 
Not a simple question to answer unfortunately, so I'll give some general info, but don't latch on to too many specifics as there are a lot of variables.

Nanite itself tends to scale extremely well with scene complexity (both number of objects and complexity of objects), obviously far better than non-Nanite. Furthermore, other systems that need fine-grained culling and LOD, like Virtual Shadow Maps and Lumen, benefit disproportionately from things being Nanite and work less efficiently without it, which is why there are often good reasons to pair the new systems up; they have been designed to work well together.

Nanite does have some amount of fixed overhead (for having *anything* Nanite in the scene) to run the culling pipeline and such, but it is not necessarily huge. Base pass materials are even more complicated to discuss; since Nanite uses a visibility buffer, the tradeoffs are a bit different there as well. Stay tuned on that front for 5.4, as there are a number of changes and improvements to the compute materials pipeline.

The biggest gotchas tend to be around older systems that want to do things like render a bunch of different passes into separate shadow atlases; those do *not* work well with Nanite. So, similar to not wanting to use VSMs without Nanite, there are potential cliffs around using Nanite with classic shadow maps. This landscape can be navigated (see Satisfactory, for instance, which is doing a good job of using Nanite in some places for big benefits but not necessarily converting everything over, while still avoiding many of the pitfall areas), but it requires folks to really understand how the engine systems work together.

Here's some high-level guidance we give folks on this front, though:

1) Once you are using any Nanite, you should generally convert all your opaque geometry to Nanite. There's not much benefit to mix-and-match.

2) Adding triangles to Nanite meshes doesn't cost very much at runtime, so use as many as you need. The main cost is more in terms of disk space (although it is well compressed) and of other tools (CAD packages, etc.) not being able to handle giant meshes well.

3) While programmable raster has been supported for a while now in Nanite, this is definitely one area where people can get themselves into trouble. Alpha test is very expensive with Nanite, so geometry should be preferred wherever possible (see the fully geometric trees in Fortnite, etc.). Similarly, things like high-density meshes together with vertex-shader-type stuff (world position offset) should be used sparingly... if WPO must be used, the shader should be as simple as possible. Both of these considerations are also true for raytracing, for similar reasons.

Went a bit beyond your question but wanted to give some context on the sorts of considerations that come into play.
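Picking up point 1 above with a concrete illustration: below is a minimal, hedged editor-side sketch for bulk-enabling Nanite on a set of static meshes so opaque geometry isn't left mix-and-match. It assumes an editor build and the UStaticMesh::NaniteSettings / FMeshNaniteSettings::bEnabled fields exposed in recent UE5 releases; exact names can differ between engine versions, so treat it as a sketch rather than production tooling.

```cpp
// Sketch only: bulk-enable Nanite on static meshes in an editor build.
// Assumes UStaticMesh::NaniteSettings (FMeshNaniteSettings) with a bEnabled flag,
// as in recent UE5 versions; verify against your engine version.
#include "Engine/StaticMesh.h"

void EnableNaniteOnMeshes(const TArray<UStaticMesh*>& Meshes)
{
    for (UStaticMesh* Mesh : Meshes)
    {
        if (!Mesh || Mesh->NaniteSettings.bEnabled)
        {
            continue; // null or already Nanite, nothing to do
        }
        Mesh->Modify();                        // register with the editor transaction/undo system
        Mesh->NaniteSettings.bEnabled = true;  // opt the mesh into Nanite
        Mesh->PostEditChange();                // kicks off a rebuild of the Nanite representation
        Mesh->MarkPackageDirty();              // so the change is picked up on save
    }
}
```

In practice most teams would do this through the editor UI or an editor utility script rather than code, but the effect is the same: once a project is using Nanite at all, sweeping every opaque static mesh into it avoids paying for two geometry paths.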
 
For the existing UE5 games, most of which started development on UE4 AFAIK, were these guidelines generally followed? I'm curious if an abnormal amount of performance was left on the table due to atypical oversights with such differing rendering paths.
 
3) While programmable raster has been supported for a while now in Nanite, this is definitely one area where people can get themselves into trouble. Alpha test is very expensive with Nanite, so geometry should be preferred wherever possible (see the fully geometric trees in Fortnite, etc.). Similarly, things like high-density meshes together with vertex-shader-type stuff (world position offset) should be used sparingly... if WPO must be used, the shader should be as simple as possible. Both of these considerations are also true for raytracing, for similar reasons.


There's an entire GDC talk on trying to get nanite to work with reactive foliage: https://schedule.gdconf.com/session...-interaction-for-ark-survival-ascended/902219

That being said, I'm equally interested in Alan Wake 2's foliage animation. Foliage has always animated like absolute junk; bringing in mass vertex animation made it acceptable for wind, but it's only slightly less junk as far as player interaction goes. Mass GPU-skinned meshes sound really cool, though, and the results were much better: https://schedule.gdconf.com/session...skinning-for-vegetation-in-alan-wake-2/899345

Edit - Also, I kinda really need the third open-world Zelda in the series to be made in UE5 now. Like, I really, really, really need this:

 
Only had a chance to play one game of fortnite's new season and it felt very smooth. Not sure if that's because I've been playing helldivers 2, which has a much rougher performance profile. Map looks great. There's a giant statue at Olympus and that whole area looks quite nice. I’ll try to capture some screens.
 
So I just finished trying this out as well. Played a few matches, and I'm super impressed! This is after a completely fresh format... so no chance of any shader caches, and it was night and day better than it was previously regarding stuttering. I'm honestly really impressed. They're actually getting it done. The matches I played were a locked 60fps, with basically everything maxed out. It looked and ran gorgeous on the first run.
 
Very hard to compare performance across seasons. I set everything to epic except VSM and Lumen, which I set to high, and turned ray tracing on. I seemed to be between 78-105 (usually 90) fps with DLSS Quality at 1440p on an RTX 3080 10GB. To me, the game feels smoother, so maybe some frame pacing changes? I like to play with really high frame rates, so I'll generally play at 1440p DLSS Quality with Nanite on but most other things low. I think effects medium, textures epic, draw distance epic, and everything else low. In that case I get 200-225 fps (Reflex is capping at 225 with my 240Hz display). It feels smoother to me than it did before. Again, it could be frame pacing, or it could be having spent a lot of time on Helldivers 2 the past week; in that game I really can't seem to break 110 fps without setting the scaling to something awful looking, and even then I end up being CPU limited and it still doesn't get any higher.

I don't have great data for the previous season to do comparisons, but it could be worth looking into. I'm hoping that at their State of Unreal, when they talk about UE 5.4, they'll talk a bit about Fortnite, or maybe they'll have another GDC talk about that.
 
For the existing UE5 games, most of which started development on UE4 AFAIK, were these guidelines generally followed? I'm curious if an abnormal amount of performance was left on the table due to atypical oversights with such differing rendering paths.
Honestly, I don't necessarily hear back a lot of details from licensees on how things went. Judging just by the public info around some of the current releases, there are obviously places where compromises were made due to not having time to build things from scratch, or to change how all the lighting was done late in the game, and so on. I don't think we've seen an optimal "all-in" kind of case yet from that point of view. On the other hand, I mentioned Satisfactory as an example of a game that has managed to implement things more tactically and iteratively, but in a way that has really improved things across a pretty broad range of hardware.

As always I look forward to seeing what people come up with next, particularly once we start seeing the big AAA games that are able to put the time into polish more and that have been using the UE5 tech from the start.
 
I wonder how much Fortnite has improved between its first UE5 release and now; it looks smoother and cleaner than I remember from when it released.
And switching between the 60 and 120 fps modes, the differences are bigger than in some "remakes" released these past years.
 

Excitement is building for #GDC2024! Our friends at Epic Games will be presenting Nanite’s GPU-driven pipeline on Wednesday morning, which will include early observations from a D3D12 Work Graphs experiment. Don't miss their session on @UnrealEngine Nanite GPU-Driven Materials!

Looks like UE5 already has work underway on using work graphs with Nanite. Should be a fun presentation.
 
I love that this kinda came out of nowhere with minimal hype. It could be such a game changer for more efficient hardware utilization. Good on Epic for pushing the envelope.
 
Black Myth: Wukong gets the full nVidia UE5 branch treatment:
Enabling Full Ray Tracing in Black Myth: Wukong sees environmental effects and detail taken to the next level. Reflections on water reflect all surrounding detail. Water caustics add further realism, accurately rendering the refraction and reflection of light. Fully ray-traced Global Illumination ensures lighting indoors and outdoors is pixel perfect, darkening areas where light is occluded or doesn’t reach, and realistically illuminating the world by bouncing light. And in concert with the lighting system, contact hardening and softening fully ray traced shadows are cast everywhere, rendering the smallest of shadows from leaves and pebbles, and those from geometry-rich buildings, the main character, and the gigantic bosses that must be overcome.
 
Epic Games confirms the price increase for those who use Unreal Engine outside of games, without breaking its promise (they promised not to increase the price of UE for video games); the commitment to the video game industry remains "ironclad".

Epic Games wants to keep a good chunk of film and series production, etc., and to gain ground on Unity in the video game industry.

 
Need For Speed Most Wanted Remake Unreal Engine 5 Rockport City:

-Created new particle system in Niagara for backfire and NOS FX
-Fake interior shader
-Made my own custom traffic system (WIP)
-Using upscaled Xbox 360 road textures
-Added 3D trees with wind effect instead of the tree cards and grass that NFSMW had
-Falling leaves FX
-Improved the sound of the TVR Cerbera that OG NFSMW is using for the BMW M3 GTR
-Brake discs heating up when braking at high speed
-Colour grading
-Original BMW M3 GTR model with slight improvements
 