Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Developers can complain that hardware rasterization doesn't meet their needs, or they can do something about it. Those like Sebbbi and Graham indirectly drove the development of the Mesh Shader API by using compute for GPU driven rendering. The other way to go about it is to create an ultra quality mode with an abundance of geometry. One problem from a hardware design point of view is justifying the area, but with most games there's more bang for the buck in devoting area somewhere else.

I'm not sure why people are concerned with rasterizer efficiency. So what if some of the pixel-pushing horsepower isn't used? I understand if pixel shader (PS) efficiency is an issue, but that's an API problem because quads are required for derivatives.
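
To make the quad point concrete, here is a minimal C++ sketch (all names are mine, purely illustrative) of why pixel shaders run in 2x2 quads: screen-space derivatives are approximated by differencing neighbouring lanes, so a triangle that covers a single pixel still occupies all four lanes of its quad, three of them as helper lanes.

[CODE]
// Minimal sketch (hypothetical names): why 2x2 quads exist and why tiny
// triangles waste pixel-shader lanes. Hardware shades pixels in 2x2 blocks
// so ddx/ddy can be taken by differencing neighbouring lanes; a triangle
// covering a single pixel still occupies all four lanes of its quad.
#include <cstdio>

struct Quad { float u[2][2]; };   // one attribute (e.g. texcoord.u) per lane

// Screen-space derivatives as the hardware approximates them:
// difference across the quad's columns/rows.
float ddx(const Quad& q, int y) { return q.u[y][1] - q.u[y][0]; }
float ddy(const Quad& q, int x) { return q.u[1][x] - q.u[0][x]; }

int main() {
    Quad q = {{{0.10f, 0.12f}, {0.11f, 0.13f}}};
    printf("ddx = %f, ddy = %f\n", ddx(q, 0), ddy(q, 0));

    // Quad utilization: a 1-pixel triangle still launches 4 lanes,
    // 3 of which are "helper" lanes that exist only to feed derivatives.
    int coveredPixels = 1, lanesLaunched = 4;
    printf("quad efficiency = %.0f%%\n", 100.0f * coveredPixels / lanesLaunched);
}
[/CODE]

In other words, shading-lane efficiency can drop toward 25% for pixel-sized triangles even when the rasterizer itself keeps up.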
 
Those like Sebbbi and Graham indirectly drove the development of the Mesh Shader API by using compute for GPU driven rendering. The other way to go about it is to create an ultra quality mode with an abundance of geometry.

Is 'the other way' Nanite? It's also a compute-based GPU-driven renderer, just taking a different approach with different strengths and weaknesses. And it almost certainly uses mesh shaders for some significant portion of the path on platforms that support it. Not sure what the point you're making is.
 
I don't think it does at the moment? But they are looking into it?

Mesh shaders were mentioned in the livestream, but I don't remember for sure when or what context. I'd speculate they use every trick to get it running on each platform though.

Edit: can't find it, maybe I'm misremembering.
 
My argument is that we prioritize the areas I mentioned first. Crank up geometry to 11 on characters, cars, heroes, and main objects as much as you want; these are things that occupy the screen 100% of the time and are always noticeable.
Seen the video where Karis showed a bicycle model beside those statues? Every tiny mechanical part of the chain is detailed geometry. So cars etc. already work, and I don't think skinned meshes will be a problem. The only problem I see is generating all content at an insane detail level. Photogrammetry won't work for cars, getting manufacturer CAD models isn't always an option, so manual modeling becomes generally more work. Though nobody forces us to use insane detail; I'm happy with less as long as it's consistent.

For characters, I see another problem: even if we have insane detail, our skinning methods are terrible, so the uncanny valley effect increases.
 
Is 'the other way' Nanite? It's also a compute-based GPU-driven renderer, just taking a different approach with different strengths and weaknesses. And it almost certainly uses mesh shaders for some significant portion of the path on platforms that support it. Not sure what the point you're making is.
No, the other way would have been to overload the rasterization path with geometry so hardware vendors would have been forced to increase geometry performance or lose benchmarks. Nanite may be a disincentive to improve hardware rasterization performance for small triangles.

Nanite is a GPU driven renderer, but it doesn't fit in the Amplification/Mesh Shader style. They will use Mesh Shaders if they don't already, but I'm betting Amplification Shaders aren't a fit for them.

Mesh Shaders are a different way of doing things and can increase efficiency, but they don't do anything to increase the triangle rasterization rate. If developers want that to be increased the best way is to show off content where that is a significant bottleneck.
 
What Nanite is doing is no different from tessellating the ground, the tree bark, the terrain, etc.; its advantage is mainly the continuous LOD, that's it.
I know some artists who would be mad at you for the implication that "just continuous LOD" is not a big deal :p I really think you should go play around in the UE5 editor a bit, even with low-poly geometry, before you make this claim so boldly... the difference between Nanite and non-Nanite in terms of performance and visual quality (especially when you consider VSMs, which really want Nanite for performance) is massive. It's worth remembering that in any given scene, even with "regular" game geometry, many objects will be rendered at lower LODs due to distance, and with conventional systems there is also an incentive to pop the LODs earlier than we would really want to if visual quality were the only concern.
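
As a toy illustration of the difference (entirely my own sketch, nothing to do with Nanite's actual cluster hierarchy or error metric), compare distance-threshold LOD switching, which pops, with picking the coarsest level whose geometric error stays under roughly a pixel when projected to screen space:

[CODE]
// Toy sketch (all names hypothetical) contrasting conventional distance-based
// LOD switching with an error-driven selection in the spirit of continuous LOD:
// pick the coarsest level whose geometric error, projected to screen space,
// stays under a target (e.g. one pixel).
#include <cmath>
#include <cstdio>
#include <vector>

struct Lod { int level; float geometricErrorWorld; };  // error in world units

int pickLodByDistance(float dist) {                    // classic threshold scheme
    if (dist < 10.f) return 0;
    if (dist < 30.f) return 1;
    return 2;
}

int pickLodByScreenError(const std::vector<Lod>& lods, float dist,
                         float viewportHeightPx, float fovY, float maxErrorPx) {
    // Project world-space error to pixels at this distance.
    float pxPerWorldUnit = viewportHeightPx / (2.f * dist * std::tan(fovY * 0.5f));
    int best = 0;
    for (const Lod& l : lods)
        if (l.geometricErrorWorld * pxPerWorldUnit <= maxErrorPx)
            best = l.level;                            // coarsest acceptable level
    return best;
}

int main() {
    std::vector<Lod> lods = {{0, 0.001f}, {1, 0.01f}, {2, 0.1f}};
    for (float d = 5.f; d <= 50.f; d += 5.f)
        printf("dist %5.1f  by-distance LOD %d  by-error LOD %d\n", d,
               pickLodByDistance(d),
               pickLodByScreenError(lods, d, 1080.f, 1.0f, 1.0f));
}
[/CODE]

The error-driven pick degrades gradually with distance and only simplifies what the viewer can't resolve, which is the basic idea behind continuous LOD.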

While the demos are going for "wow factor" with Quixel scans, as discussed in the live stream, it's obviously up to individual games to decide where to apply the detail. The fact that you now have a choice - and in many cases it is a huge amount cheaper than it used to be - is a big deal IMO.

I also highly suspect, given my experience, that this is a bit like most new features/updates: it's most obvious in retrospect, once you've gotten used to the new level.

So IMO, this fades in comparison to the need for a fully dynamic GI system in each game, which takes care of both lighting and shadows simultaneously to deliver lifelike visuals.
Right, but Nanite ties into how we are doing realistic lighting and shadows too.
 
Nanite may be a disincentive to improve hardware rasterization performance for small triangles.
It will be interesting to see where hardware raster goes with this, but one thing has been very clear to me since the Larrabee days: you want two paths for small and large triangles. Nanite does this by throwing the larger triangles to the HW rasterizer (with prim shaders where possible) and the smaller ones to a software rasterizer. The triangle size where this split happens is a dynamic CVar... no need to speculate, go play with it on your hardware of choice.
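
The classification itself is conceptually simple. A rough C++ sketch (my own names, with a hypothetical cutoff standing in for that CVar) of routing triangles by screen-space extent might look like:

[CODE]
// Sketch (hypothetical names) of the kind of split described above: estimate
// each triangle's screen-space extent and route small triangles to a software
// (compute) rasterizer and large ones to the hardware rasterizer. The cutoff
// is a tunable, standing in for the engine CVar mentioned above.
#include <algorithm>
#include <cstdio>

struct ScreenTri { float x[3], y[3]; };

enum class RasterPath { Hardware, Software };

RasterPath classify(const ScreenTri& t, float smallTriEdgePx /* tunable cutoff */) {
    float minX = std::min({t.x[0], t.x[1], t.x[2]});
    float maxX = std::max({t.x[0], t.x[1], t.x[2]});
    float minY = std::min({t.y[0], t.y[1], t.y[2]});
    float maxY = std::max({t.y[0], t.y[1], t.y[2]});
    float extent = std::max(maxX - minX, maxY - minY);   // bounding-box edge in pixels
    return extent <= smallTriEdgePx ? RasterPath::Software : RasterPath::Hardware;
}

int main() {
    ScreenTri tiny  = {{100.f, 100.6f, 100.3f}, {50.f, 50.2f, 50.7f}};
    ScreenTri large = {{0.f, 300.f, 150.f},     {0.f, 0.f, 400.f}};
    printf("tiny  -> %s\n", classify(tiny,  4.f) == RasterPath::Software ? "SW" : "HW");
    printf("large -> %s\n", classify(large, 4.f) == RasterPath::Software ? "SW" : "HW");
}
[/CODE]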

But yeah if hardware wants to "re-take" all of this ground I doubt it will make sense to try and force all of the tiny triangles down the current prim setup paths... it's just a huge waste when you're only going to draw one pixel anyways, as is stamp-style rasterization. Suffice it to say though if HW rasterizers magically start handling dense geometry well, I'm pretty sure no one will be happier than Graham, Brian, etc :) No one *wants* to be maintaining these tricky paths, but waiting literal decades for IHVs to do something about this has not produced any results thus far. Maybe this will be the kick in the pants they need. That said, if they have to add a pile of area that would otherwise be used for more compute/memory stuff to make this happen, that may be a bad trade-off overall too.

The graphics APIs do tie their hands slightly around things like derivatives, but that could obviously change as well. I've been joking that we're nearing the point where the APIs could remove pixel shaders (and instead just write out a visibility buffer) and we'd be okay with it :) It's an overstatement as there are still obviously various forward/blending cases that remain but for the vast majority of the frame we don't need the "typical" graphics pipeline anymore and I don't really see that part changing a ton. The advantages in using better data structures and not doing everything super brute force are just too large and growing.
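
For reference, the visibility-buffer idea is tiny at its core. Here is a hedged sketch (the bit layout is made up for illustration, not any engine's actual format) of what the raster pass would write per pixel instead of running a full pixel shader:

[CODE]
// Sketch of the visibility-buffer idea mentioned above (layout is hypothetical):
// instead of running a pixel shader per triangle, the raster pass writes only
// "which triangle of which instance is visible here", packed into one value,
// and material/shading is resolved later in a deferred pass.
#include <cstdint>
#include <cstdio>

// 64-bit packing: depth in the high bits so a 64-bit atomic max can double as
// the depth test (closest wins with reverse-Z), IDs in the low bits.
uint64_t packVisibility(uint32_t depthBits, uint32_t instanceId, uint32_t triangleId) {
    uint32_t payload = (instanceId << 10) | (triangleId & 0x3FF);  // 22b instance, 10b tri
    return (uint64_t(depthBits) << 32) | payload;
}

void unpackVisibility(uint64_t v, uint32_t& instanceId, uint32_t& triangleId) {
    uint32_t payload = uint32_t(v);
    instanceId = payload >> 10;
    triangleId = payload & 0x3FF;
}

int main() {
    uint64_t texel = packVisibility(/*depthBits*/ 0x3F800000u, /*instance*/ 1234, /*tri*/ 567);
    uint32_t inst, tri;
    unpackVisibility(texel, inst, tri);
    printf("instance %u, triangle %u\n", inst, tri);   // 1234, 567
}
[/CODE]

A depth-in-the-high-bits packing like this is also one reason the approach pairs well with a compute rasterizer: visibility resolves with a single atomic per pixel.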
 
Personally, it wasn't until I saw people messing around with the recent UE5 early access release that I started to really understand how impressive and game changing it was. While the previous reveal and announcement looked nice, it was still just a demo along a pre-scripted corridor as we've seen multiple times from Epic WRT UE.

However, seeing how quickly various people have come up with their own demos in UE5 and seeing it in immediate action with free roaming cameras really shows just how potentially game changing this is.

That world geometry might finally have the fidelity to not look like it's made of triangles. More importantly, with Nanite an artist may no longer need to use the largest and fewest triangles possible for geometry in order to free up enough resources for high-poly character models and other important assets.

What, you mean there may come a time soon-ish where I won't turn my camera in a game and come face to face with a low-poly asset or area in the geometry which an artist had to use in order to free up resources for other, more visible areas of the scene? Oh my goodness, that would be fantastic.

While the lighting may not be as accurate as RT in as many situations (or in all cases), the way it interacts with all that fine geometry detail is so far ahead of what I've seen with RT til now. Don't get me wrong, current RT implementations are great to get the ball rolling, but it's still not "there" yet. Lumen obviously isn't "there" yet either, but the fine detail of the world geometry gets rid of some of the glaring inconsistencies with RT that we've seen in conventional games til now. For example, shadows cast by those low-poly world assets (used to allow for higher-poly assets elsewhere in the scene) looking unnatural.

This has certainly gotten me at least as excited as, or more so than, RT. We're finally starting to see some of the things that I was hoping tessellation hardware would bring, but which never materialized due to the cost overhead and complexity of using hardware tessellation for many things. I still remember discussing that with people on the forum that are no longer here, like Laa_Yosh, and how he'd patiently try to explain why hardware tessellation was so difficult to use in 3D rendering.

Regards,
SB
 
A controversial tweet by the ex-Call of Duty director of R&D graphics teams, now at ROBLOX.

I would think of it like this: it is good that different industries tackled different methods for achieving higher image quality. There are researched methods out there that can improve one another and be tried under the opposite conditions, on either render farms or in real-time. This R&D is always a net gain in the end, even if it doesn't work for its intended purpose; now real-time rendering, going with hybrid software rasterizers on top of the RT advancements already achieved, will look even better. Or it will at least provide alternative options, letting game engines choose an approach based on the project.
 
[Attached image: screenshot of the tweet in question]


 
Personally, it wasn't until I saw people messing around with the recent UE5 early access release that I started to really understand how impressive and game changing it was. While the previous reveal and announcement looked nice, it was still just a demo along a pre-scripted corridor as we've seen multiple times from Epic WRT UE.

However, seeing how quickly various people have come up with their own demos in UE5 and seeing it in immediate action with free roaming cameras really shows just how potentially game changing this is.

That world geometry might finally have the fidelity to not look like it's made of triangles. More importantly, with Nanite an artist may no longer need to use the largest and fewest triangles possible for geometry in order to free up enough resources for high-poly character models and other important assets.

What, you mean there may come a time soon-ish where I won't turn my camera in a game and come face to face with a low-poly asset or area in the geometry which an artist had to use in order to free up resources for other, more visible areas of the scene? Oh my goodness, that would be fantastic.

While the lighting may not be as accurate as RT in as many situations (or in all cases), the way it interacts with all that fine geometry detail is so far ahead of what I've seen with RT til now. Don't get me wrong, current RT implementations are great to get the ball rolling, but it's still not "there" yet. Lumen obviously isn't "there" yet either, but the fine detail of the world geometry gets rid of some of the glaring inconsistencies with RT that we've seen in conventional games til now. For example, shadows cast by those low-poly world assets (used to allow for higher-poly assets elsewhere in the scene) looking unnatural.

This has certainly gotten me at least as excited as, or more so than, RT. We're finally starting to see some of the things that I was hoping tessellation hardware would bring, but which never materialized due to the cost overhead and complexity of using hardware tessellation for many things. I still remember discussing that with people on the forum that are no longer here, like Laa_Yosh, and how he'd patiently try to explain why hardware tessellation was so difficult to use in 3D rendering.

Regards,
SB

But UE5 is using RT. Just not hardware / triangle based RT. I'd wager it wouldn't look nearly as impressive with traditional lighting models. I absolutely agree with you that what they're doing wrt geometry is both awesome and exciting. But it's the combination of that geometry with an accurate RT based lighting system and photorealistic Quixel Megascan textures that's making it look so amazing. I don't think framing it as 'denser geometry is more important than RT based lighting' is the right way of looking at it. I know I've mentioned it above, but Metro Enhanced and Lego builder are great examples of how RT can make even a basic / last gen game look incredible without 1tri/pxl levels of geometry.

But we already have geometry density on the scale of the Demon's Souls remake, R&C: Rift Apart, Horizon FW etc... using non-Nanite based approaches. I'm sure from a development point of view Nanite would help those games tremendously (and open the door to that level of graphics for less talented / well funded devs), but would it make that huge a difference to the end visual result? In any case, for the best "next gen" visuals, you're going to need both. And that's what UE5 delivers. So everyone's a winner.
 
But UE5 is using RT. Just not hardware / triangle based RT.

UE5 uses both SW and HW RT. HW RT is the higher-quality option over SW RT.

Hardware Ray Tracing supports a larger range of geometry types than Software Ray Tracing, in particular it supports tracing against skinned meshes. Hardware Ray Tracing also scales up better to higher qualities — it intersects against the actual triangles and has the option to evaluate lighting at the ray hit instead of the lower quality Surface Cache. However, Hardware Ray Tracing has significant scene setup cost and currently cannot scale to scenes with more than 100,000 instances. Dynamically deforming meshes, like skinned meshes, also incur a large cost to update the Ray Tracing acceleration structures each frame, proportional to the number of skinned triangles.

Limitations of Software Ray Tracing
Software Ray Tracing has some limitations relating to how you should work with it in your projects, and what types of geometry and materials it currently supports.

This is not an exhaustive list of known issues or limitations, but is largely representative of what you should expect while working with Lumen Software Ray Tracing in Unreal Engine 5 Early Access.

Geometry Limitations
  • Only Static Meshes, Instanced Static Meshes, and Hierarchical Instanced Static Meshes are represented in the Lumen Scene.

  • Landscape geometry is not currently represented in the Lumen Scene and therefore does not bounce lighting. This will be supported in a future release of the engine.
Material Limitations
  • World Position Offset (WPO) is not supported.

  • Transparent materials are ignored by Distance Fields, and Masked materials are treated as opaque. This can cause significant over-shadowing on foliage, which has large areas of leaves masked out.

  • Distance Fields are built off of the properties of the material assigned to the Static Mesh Asset, rather than the override component. Overriding with a material that has a different Blend Mode or Two-Sided property will cause a mismatch between the triangle representation and the Distance Field.
Workflow Limitations
  • Software Ray Tracing requires levels to be made out of modular pieces. Walls, floors, and ceilings should be separate meshes. Large meshes, like mountains, will have poor representations and may cause self-occlusion artifacts.

  • Walls should be no thinner than 10 centimeters (cm) to avoid light leaking.

  • Mesh Distance Field resolution is assigned based on the imported scale of the Static Mesh. A mesh that is imported very small and then scaled up on the component will not have sufficient Distance Field resolution. If you use significant scaling on placed instances in a level, use Distance Field Resolution Scale to compensate.

  • Distance Fields cannot represent extremely thin features, or one-sided meshes seen from behind. Avoid artifacts by ensuring the viewer doesn't see triangle back faces of one-sided meshes.

https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Lumen/TechOverview/
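
The skinned-mesh update cost the docs mention maps directly onto how DXR acceleration structures work. A rough sketch of the per-frame refit a deforming mesh needs (plain D3D12/DXR usage, not UE5's code; resource creation, skinning dispatch, and barriers omitted):

[CODE]
// Sketch of the per-frame refit the quoted docs refer to for skinned meshes
// under Hardware Ray Tracing. This is plain DXR usage, not UE5 code: the BLAS
// is built once with ALLOW_UPDATE, then refit every frame with PERFORM_UPDATE
// after skinning writes new vertex positions. The refit touches every skinned
// triangle, which is why the cost grows with triangle count.
#include <d3d12.h>

void RefitSkinnedBlas(ID3D12GraphicsCommandList4* cmdList,
                      const D3D12_RAYTRACING_GEOMETRY_DESC* skinnedGeometry, // post-skinning VB
                      D3D12_GPU_VIRTUAL_ADDRESS blas,
                      D3D12_GPU_VIRTUAL_ADDRESS scratch)
{
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
                            D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = skinnedGeometry;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC desc = {};
    desc.Inputs                           = inputs;
    desc.DestAccelerationStructureData    = blas;     // refit in place
    desc.SourceAccelerationStructureData  = blas;
    desc.ScratchAccelerationStructureData = scratch;

    // Cost of this call is roughly proportional to the number of skinned
    // triangles in the geometry, paid every frame the mesh deforms.
    cmdList->BuildRaytracingAccelerationStructure(&desc, 0, nullptr);
}
[/CODE]

Roughly speaking, the instance ceiling mentioned above is on the scene-setup side (covering all those instances each frame), while a refit like this is the per-mesh cost that scales with skinned triangle count.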
 
But UE5 is using RT. Just not hardware / triangle based RT. I'd wager it wouldn't look nearly as impressive with traditional lighting models. I absolutely agree with you that what they're doing wrt geometry is both awesome and exciting. But it's the combination of that geometry with an accurate RT based lighting system

I wouldn't call it accurate. Lumen is great, but it's nowhere near the resolution or speed of the RT GI systems we've seen already, and there are no RT shadows or reflections. As I said above, the shadow system in UE5 is amazing but not close to the accuracy of good RT shadows. Not clear how much that'll matter in average scenes though.

but would it make that huge a difference to the end visual result?

Yes. Those games are incredibly limited by polygon budgets; it's clear as day looking at them. I think if you guys saw wireframes side by side between those scenes and the various scenes we've seen cobbled together in UE5, you'd be shocked.
 
There's no LOD in RT either; DXR needs to evolve.
To trace against Nanite meshes, which Lumen doesn't do either. Tracing against proxies is possible right now if they optimize the geometry setup for the BVH builders, which they'll have to do if they want to use RT h/w in Lumen as a high-quality option, as they've stated.
 
But the RT path is inefficient because DXR is not flexible enough.

Epic and UE5 should be a big enough player to affect how future versions of the DXR API and GPU HW evolve. It's anybody's guess what kind of 1:1 discussions Epic has had with MS, AMD, Nvidia, Intel, ... and when those discussions might have started.

If there is some serious SW/API limitation, I would expect either Sony or Microsoft to give Epic low-level HW access. It could be interesting if there turns out to be a SW limitation on the PC side. Banging straight to the metal on consoles could produce something surprising.
 