Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

They’re also merging triangle geometry as a sort of coarse-grained LOD to accelerate hardware RT. No idea if the merging technique would work for objects close to the camera, though.
Thanks. Good to know. Maybe the far-field merging can be reused for near-field objects as well; that would eliminate the discrepancy in performance between the SW and HW implementations for kitbashed geometry.
 
Yet, merging geometry instances sounds like a task that can be automated, and in fact it has already been accomplished for distance field volumes.
I understand that merging 3D volumes should be a way simpler task than merging polygons, but Epic already did the impossible with Nanite, so merging polygonal geometry doesn't sound unrealistic to me)
It can be automated, but all existing robust solutions involve converting geometry to a volumetric format for merging, and then back to geometry. This is inherently very lossy -- it aliases geometry into a grid, so even the densest volume (and these are uniform cube grids, so memory footprint/CPU cost to merge goes up extremely fast) is going to have stairstepping and inaccuracy along edges that don't follow the grid.

Getting a good result means some work sorting that out, combining it with other less robust solutions like booleans, and picking and choosing what you merge. Easily within the ability of a studio that knows their content, but not something I'd like to see a game engine provide a general-purpose solution for.
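To put rough numbers on that "goes up extremely fast" point, here's a back-of-the-envelope sketch (illustrative only; assumes an 8-bit quantized distance per voxel, which is a common but not universal choice):

```cpp
#include <cstdio>

int main() {
    const double bytesPerVoxel = 1.0; // assumed: 8-bit quantized distance
    for (int res = 64; res <= 1024; res *= 2) {
        double voxels = double(res) * res * res; // uniform cube grid
        printf("%5d^3 grid: %8.1f MB\n", res, voxels * bytesPerVoxel / (1024.0 * 1024.0));
    }
    return 0;
}
```

Every doubling of resolution costs 8x the memory (64^3 is 0.25 MB, 1024^3 is already 1 GB), and merge time scales similarly, which is why chasing edge accuracy with brute resolution is a dead end.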
 
it aliases geometry into a grid, so even the densest volume (and these are uniform cube grids, so memory footprint/CPU cost to merge goes up extremely fast) is going to have stairstepping and inaccuracy along edges that don't follow the grid
Honestly, I don't think that even with all these conversions it would be worse than a low-res merged distance field. I mean, even with such conversions, triangle geometry should prevent the glaring light leaks caused by low volume resolution, which are easy to spot in many places (like the container in this otherwise quite realistic trailer, for example). There are limitations with workflows in UE5, and one must keep these limitations in mind to avoid artifacts. Completely transitioning to triangle RT could alleviate some of these limitations and make the work easier.
 
Honestly, I don't think that even with all these conversions it would be worse than a low-res merged distance field. I mean, even with such conversions, triangle geometry should prevent the glaring light leaks caused by low volume resolution, which are easy to spot in many places (like the container in this otherwise quite realistic trailer, for example). There are limitations with workflows in UE5, and one must keep these limitations in mind to avoid artifacts. Completely transitioning to triangle RT could alleviate some of these limitations and make the work easier.
I'm not so sure -- SDFs are great for getting an aggregate approximation of a bunch of very small things, especially compared to a similar amount of triangle data, which will be full of long, stretched tris that don't capture area well. SDFs also provide nice ways to approximate cone mapping. And they go down in detail basically infinitely, vs triangles, which deteriorate heavily as the surface becomes harder to represent. Any technique, including triangle RT at any realistic number of triangles and rays, is going to require a lot of care, concessions, content choices, etc. to ensure an artifact-free image. I haven't comparatively evaluated everything out there or anything like that, but Lumen seems like a pretty nice set of compromises to me.
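For the cone-mapping point, the classic SDF soft-shadow trick is a minimal illustration. This is a sketch under simplified assumptions (sceneSdf is a stand-in unit-sphere field, and the constants are arbitrary), not any engine's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Stand-in scene field: a unit sphere at the origin.
float sceneSdf(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// Sphere-trace toward a light; k*d/t bounds how much of a cone of
// half-angle ~1/k stays unoccluded, giving cheap cone-like soft shadows.
float softOcclusion(float ox, float oy, float oz,
                    float dx, float dy, float dz, float k) {
    float occ = 1.0f;
    for (float t = 0.1f; t < 20.0f; ) {
        float d = sceneSdf(ox + dx * t, oy + dy * t, oz + dz * t);
        if (d < 1e-4f) return 0.0f;          // hard hit: fully occluded
        occ = std::min(occ, k * d / t);      // cone coverage estimate
        t += d;                              // standard sphere-tracing step
    }
    return occ;                              // ~1 lit, ~0 shadowed
}

int main() {
    // A ray grazing past the sphere lands in the penumbra.
    printf("occlusion = %.2f\n", softOcclusion(1.2f, 0, -5, 0, 0, 1, 8.0f));
    return 0;
}
```

The same distance samples the tracer already needs double as a conservative cone estimate, which is exactly the kind of thing triangle data gives you no free equivalent of.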
 
The funny part is that there are tools in the UE5 editor that allow merging geometry instances, though I wasn't able to utilize them for HW RT acceleration purposes in the Valley demo:)
Yet, merging geometry instances sounds like a task that can be automated, and in fact it has already been accomplished for distance field volumes.
I understand that merging 3D volumes should be a way simpler task than merging polygons, but Epic already did the impossible with Nanite, so merging polygonal geometry doesn't sound unrealistic to me)
There's some simplification that can be done without changing the result, in terms of eliminating stuff that is never going to be visible, but there are some caveats, the first of which is probably the most relevant:

1) Kit-bashing works because you are relying heavily on instancing and instance transforms both to create the geometry and to store it efficiently in memory. If you actually made all of these instances unique by merging them and removing invisible geometry, it would be impractically large to store on disk and so on (see the sketch after this list). A balance needs to be struck between optimizing these kit-bashed results without introducing too much overhead to the storage by unwinding what is effectively pretty efficient "compression" of the geometry based on how it was created.

2) You could analytically find intersections between the geometry and re-weld everything along them into a single mesh, but fixing textures and UVs is non-trivial then (even if they happen to use the same material)... you effectively have to unwrap the mesh all over again, which as we know can be partially but not fully automated. This is probably a deal-breaker in practice, so you'd still need to keep them as separate draws/bindings/materials. But of course you'd still lose the benefits of instancing because now the meshes are unique.

HLOD of course does these things, but at a very low quality level that would never be sufficient for anything up close to the camera (and arguably isn't even sufficient for distant primary rays).
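To make the storage argument in (1) concrete, a toy calculation (all sizes hypothetical, just to show the shape of the problem):

```cpp
#include <cstdio>

int main() {
    // Hypothetical numbers: a 2 MB kit piece instanced 10,000 times.
    const double meshBytes     = 2.0 * 1024 * 1024;
    const double instanceBytes = 64.0;  // roughly a 4x3 transform + mesh id
    const int    instances     = 10000;

    // Instanced: one copy of the mesh plus a transform per placement.
    double instanced = meshBytes + instances * instanceBytes;
    // Merged: every placement baked out as unique geometry.
    double merged = meshBytes * instances;

    printf("instanced: %9.1f MB\n", instanced / (1024 * 1024));
    printf("merged:    %9.1f MB\n", merged / (1024 * 1024));
    return 0;
}
```

Merging trades well under a megabyte of transforms for roughly 20 GB of unique vertex data in this toy case; invisible-surface removal and deduplication claw some of that back, but not three orders of magnitude.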

I'm not so sure -- SDFs are great for getting an aggregate approximation of a bunch of very small things
Right, I think it's important for people to remember that triangle data is great for representing some things (especially large flat surfaces), but not great for other things. Offline rendering uses a mix of triangles, voxels/bricks, SDFs and other representations and I don't see that being any different in the near term for real-time. Thus we will continue to have to deal with and support multiple representations in the various rendering systems as well as possible.
 
SDFs are great for getting an aggregate approximation of a bunch of very small things, especially compared to a similar amount of triangle data, which will be full of long, stretched tris that don't capture area well. SDFs also provide nice ways to approximate cone mapping. And they go down in detail basically infinitely, vs triangles, which deteriorate heavily as the surface becomes harder to represent. Any technique, including triangle RT at any realistic number of triangles and rays, is going to require a lot of care, concessions, content choices, etc. to ensure an artifact-free image.
I would not consider these to be significant advantages (if at all - they are all solved for triangles at this point), given the task these attempt to solve. The primary goal of triangle proxies or Signed Distance Functions (SDFs) is not to aggregate approximations of a bunch of very small things (which can be done via different compression schemes for triangles), but rather to provide a good, preferably watertight, approximation of the main pass triangle geometry, encompassing all types of geometry. It just so happens that SDFs are a very poor proxy for triangles because, as you mentioned earlier, volumes have finite resolution, which makes them inherently lossy when it comes to representing all the small or thin triangle geometry in the main pass (and especially so given the low resolutions that are used in practice). Unfortunately, volumes also do not support all the dynamic geometry types that triangles do, which only makes things worse.
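The thin-geometry failure mode is easy to demonstrate. A minimal sketch (a 1D stand-in for a volume; the resolution and wall thickness are made-up numbers): an exact SDF of a thin slab, sampled at voxel centers, can come back positive everywhere, at which point the wall simply doesn't exist for a sphere-traced ray:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const int   res = 8;        // very low-res "volume" along one axis
    const float h   = 0.02f;    // wall half-thickness, well below voxel size
    bool inside = false;
    for (int i = 0; i < res; ++i) {
        float x = (i + 0.5f) / res;         // voxel-center sample position
        float d = std::fabs(x - 0.5f) - h;  // exact distance to slab at x=0.5
        inside |= (d < 0.0f);
        printf("x=%.4f  d=%+.4f\n", x, d);
    }
    printf(inside ? "wall captured\n"
                  : "no sample inside the wall -> light leaks through\n");
    return 0;
}
```

With these numbers the nearest samples sit 0.0625 away from the wall's center plane, so every stored distance is positive and the slab vanishes; that is the leak-through-thin-walls artifact in miniature.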
 
Volumetric representation is a standard approach for aggressive simplification, as it merges any geometric detail below voxel size, which makes finite resolution a feature, not a limitation. This is not just theory: you can spot volumetric-representation artifacts in HLOD or Simplygon meshes.

A per-mesh SDF or voxel representation enables runtime merging of geometry: there is no need to mark static geometry upfront and bake it. Memory usage is also better, as instead of storing data for every instance combination you only need to store data per mesh.
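A minimal sketch of what that runtime merge can look like, under simplified assumptions (only translated instances, and a unit-sphere field standing in for a baked per-mesh SDF): the merged scene field is just the min over instances, each evaluated in its own local space, so nothing has to be baked per placement:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for sampling a mesh's baked SDF volume; here, a unit sphere.
float meshSdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

struct Instance { Vec3 t; };  // real engines carry full transforms

// Union of all instances: min of per-instance distances. One SDF is
// stored per mesh; each instance only adds a transform.
float sceneSdf(Vec3 p, const Instance* inst, int count) {
    float d = 1e30f;
    for (int i = 0; i < count; ++i) {
        Vec3 local = { p.x - inst[i].t.x, p.y - inst[i].t.y, p.z - inst[i].t.z };
        d = std::min(d, meshSdf(local));
    }
    return d;
}

int main() {
    Instance scene[2] = { {{0, 0, 0}}, {{1.5f, 0, 0}} };
    printf("d at overlap midpoint: %.3f\n", sceneSdf({0.75f, 0, 0}, scene, 2));
    return 0;
}
```

Under pure translation the min is still an exact distance field; rotation and uniform scale also work with minor bookkeeping, which is what makes the per-mesh scheme attractive for dynamic scenes.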

At the end of the day, a 4090 can brute-force content designed for the PS4, but things get complicated when a Nanite level of geometric complexity and instancing has to be ray traced on a PS5.
 
Volumetric representation is a standard approach for aggressive simplification, as it merges any geometric detail below voxel size, which makes finite resolution a feature, not a limitation.
Gosh, have you tried UE at all? Are you sure there is nothing missing in the Lumen scene with SDFs (https://imgsli.com/MTc0ODM2)? And that's not even the fine-grained stuff we were talking about; that's a large 3D model missing whole body parts. What are you talking about? Sorry, I am just tired of these talks and don't want to spend more time on this.
 
Gosh, have you tried UE at all? Are you sure there is nothing missing in the Lumen scene with SDFs (https://imgsli.com/MTc0ODM2)? And that's not even the fine-grained stuff we were talking about; that's a large 3D model missing whole body parts. What are you talking about? Sorry, I am just tired of these talks and don't want to spend more time on this.
No, I mean, that looks pretty great to me. We live in a world of severely lossy approximations. Hard to imagine triangles doing that well for that cost. Obviously, if you had the budget to trace against a ton of tris, tris are better than a super-low-res 3D texture SDF, but once again I'm not sure they're better than a double-res SDF would be for this use case. It got basically all of the aggregate shapes and surfaces correct, which should be more than sufficient for distant approximate lighting.

Hard to imagine constructing a scene where somehow one of those statue ears is absolutely required for correct occlusion and it's not covered by shadow maps on the main geometry.
 
Considering how many borked UE4 games we have gotten recently, I am chomping at the bit to see what UE5 plays like in a realistic scenario on console.
 
Hard to imagine triangles doing that well for that cost.
Triangles do just fine for the cost in Crysis, WoT, and the Neon city demo, even with SW RT. The problem is that SW triangle RT is very limited in the same way as SDF RT. I am pretty sure that certain modern games with very thin RT implementations would do just fine with SW RT too if there were a knob to switch it on.
 
Hard to imagine constructing a scene where somehow one of those statue ears is absolutely required for correct occlusion and it's not covered by shadow maps on the main geometry.
These ears are absolutely necessary for shadows, self-occlusion (specular, AO), mirror reflections, and so on. However, the ears are not the issue; the problem arises when light leaks through walls in a game because the walls are not thick enough for low-res SDFs to capture them. This issue primarily affects box-shaped geometry, which is quite common in games.
 
Triangles do just fine for the cost in Crysis, WoT, and the Neon city demo, even with SW RT. The problem is that SW triangle RT is very limited in the same way as SDF RT. I am pretty sure that certain modern games with very thin RT implementations would do just fine with SW RT too if there were a knob to switch it on.
Crytek's GI is SVOGI, last I knew; that's another volumetric format. I agree the low-res SDFs are unsuited for direct reflections or anything close up -- that's not what they're for. Far away, you won't get enough rays per frame for accurate self-occlusion on a surface that small anyway.

Reflection probes still exist in the game, and I expect Lumen is pretty comparable to SVOGI; you could recreate the Neon city demo pretty easily -- there are different pros and cons, but these are broadly comparable techniques.
 
Crytek's GI is SVOGI, last I knew; that's another volumetric format.
They obviously link triangles to the voxel octree for the triangle RT reflections that are used in Crysis Remastered on consoles and in the Neon city demo.
For most people, it doesn't make sense to maintain dozens of systems for global illumination, reflections, and so on.
As a result, people will gradually abandon older voxel-based methods over time and move towards unified solutions.
 
They obviously link triangles to the voxel octree for the triangle RT reflections that are used in Crysis Remastered on consoles and in the Neon city demo.
For most people, it doesn't make sense to maintain dozens of systems for global illumination, reflections, and so on.
As a result, people will gradually abandon older voxel-based methods over time and move towards unified solutions.
There's no magic linking; they're using multiple complementary techniques -- just like UE5.
 
There's no magic linking; they're using multiple complementary techniques -- just like UE5.
What exactly is magic about it? This is how it's done in multiple engines I know of; there is no separate BVH for SW triangle RT in CryEngine.
"Our SVOGI (Total Illumination) system already contained what was necessary, so it was a relatively straightforward step to add the data required for ray tracing. In the current implementation, for every voxel, we store a reference to overlapping triangles, plus the usual information like albedo, opacity, and normal data."
 
What exactly is magic about it? This is how it's done in multiple engines I know of; there is no separate BVH for SW triangle RT in CryEngine.
"Our SVOGI (Total Illumination) system already contained what was necessary, so it was a relatively straightforward step to add the data required for ray tracing. In the current implementation, for every voxel, we store a reference to overlapping triangles, plus the usual information like albedo, opacity, and normal data."
Right, so they have a coarse volumetric representation for lighting, and a strategy for doing quick indirect lookups of geometry. This still doesn't sound fundamentally different from UE5's two-tiered approach, with Nanite for visibility, direct lighting, and shadows, and Lumen for GI.
 
Right, so they have a coarse volumetric representation for lighting, and a strategy for doing quick indirect lookups of geometry
They did it because it was the simplest way to accomplish the task, given the tools they already had at their disposal. The voxel octree, however, is not known to be the best acceleration structure for triangle-based ray tracing. Consequently, this choice was likely made at the expense of triangle tracing performance. They could have used BVH AABBs with attached color attributes for GI purposes, and it might have resulted in a faster, unified SW solution. I guess we will never know.

But let's return to Lumen. Hardware tracing, with all the benefits of dynamic geometry, reduced light leaking issues, and the potential for using hit shaders on PCs for improved lighting and shading, works just as fast as software tracing in the Matrix demo on AMD hardware, which is not known for the best traversal speed. I can only imagine it being faster on other architectures. There is already performance parity on consoles, as well as better scaling and higher quality with hardware RT, so I don't see why someone would opt for a limited solution when a superior one is already available, except for corner cases such as kitbashing, laziness or lack of expertise, or just the fact that SW RT is used by default, which I personally find quite unfortunate (given the hardware requirements and release dates of the first UE5 games, I just don't understand why this is the case in 2023).

What I am concerned about, as a PC graphics enthusiast, is that SW RT could drag graphics quality down in PC games and be used as an excuse for ports with minimal graphics improvements on PC.
 