Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Downloaded the video from Vimeo and made a snapshot per frame of the debug view.
I see a lot of triangle clusters occasionally LODing up and down. It's not obvious what triggers the change - it seems to be neither camera distance nor curvature at incident angles.

I have not found adjacent clusters that transition together. Only 'isolated' clusters, meaning a cluster that goes from lod 0 to lod 1 is surrounded only by lod 0 clusters, never by a mix of 0 and 1.
This leads me to speculate about a very simple system, assuming we would want to go from lod 0 to lod 2:
The whole model can transition between two lods, but not more. It becomes an entirely different model if more is necessary. So no hierarchy is necessary at the lower level of the tech.
After all clusters have changed from lod 0 to 1, all cluster boundaries still have detail that matches lod 0. So there might be a second phase of transition that changes the boundaries to lod 1 as well. (This second phase could be mixed with the first, of course.)
After everything is at lod 1, replace it with the version of the model that contains lod 1 and 2.

Some image showing two clusters (left yellowish / right blueish) and a common boundary path between them:

upload_2020-5-21_21-17-18.png
So for each cluster we get subclusters toward each adjacent one (light and dark green). We can imagine the clusters as polygons (two quads) and the shared boundary path as an edge between two polygons.
The point is, each subcluster would have only 3 states:
All at lod 0.
Center at 0, edge at 1.
All at 1.

That's a lot simpler than a tree of edge collapses for a progressive mesh, and it's still about efficient clusters, not individual triangles, of course.
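A minimal sketch of what I mean, with made-up names (one boundary subcluster per pair of adjacent clusters, and only the three states above allowed):

```cpp
// Made-up sketch of the three allowed states per boundary subcluster.
// Each cluster owns one subcluster per adjacent cluster (the light/dark green
// strips in the image), and the shared edge path must be at the same LOD on
// both sides so the boundary stays crack-free.
enum class SubclusterState {
    AllLod0,             // interior and shared edge both at LOD 0
    CenterLod0_EdgeLod1, // interior still at LOD 0, shared edge already at LOD 1
    AllLod1              // interior and shared edge both at LOD 1
};

struct BoundarySubcluster {
    int ownerCluster;    // cluster this subcluster belongs to
    int neighborCluster; // adjacent cluster sharing the edge path
    SubclusterState state = SubclusterState::AllLod0;
};

// One possible ordering: the shared edge only moves to LOD 1 once both clusters
// want LOD 1; until then the edge strip stays at LOD 0 and only interiors differ.
SubclusterState ResolveState(bool ownerInteriorAtLod1, bool ownerWantsLod1, bool neighborWantsLod1)
{
    const bool edgeAtLod1 = ownerWantsLod1 && neighborWantsLod1;
    if (ownerInteriorAtLod1 && edgeAtLod1) return SubclusterState::AllLod1;
    if (edgeAtLod1)                        return SubclusterState::CenterLod0_EdgeLod1;
    return SubclusterState::AllLod0;
}
```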

Discrete LODs are already there, for both streaming and weak platforms.

I think the idea would also work as a real hierarchy with more lods than just two, for a huge model that needs this.

Point splatting could still be used in addition, e.g. clusters of points per vertex of the clusters.

(I wanted to upload two screenshots but it did not work; I'll try again in a new post...)

1.png

2.png

Upload worked after cropping the images. Going through the video frame by frame shows the clusters much better.
 
Yeah, that's what I have been thinking. Build and update a lower-poly proxy of the scene, still using Nanite, for the RT rays. No culling for this one. Maybe some less aggressive LODing based solely on distance to the camera's centerpoint may still be a win, but this can be relaxed in the name of updating the proxy less often.
Hmm. I was thinking of creating these offline, where you could optimise their appearance. I guess if Nanite LOD goes low enough, you could bring in a 'low poly' model and bake it on the fly, but that depends on how gracefully Nanite decomposes geometry. Can it maintain symmetry and pattern? Hand-crafted models might be a lot more reliable. Or maybe machine-learnt creations based on artist analysis, if we're trying to save time!
 
The problem here is: how do you trace against triangles when half the triangles of your mesh aren't present in RAM?
You have the full scene for RT, but at lower LOD.
The only problem is if you emit rays from screen space, since they could start behind the lower-LOD geometry. Thus displace the ray start a bit from the higher-LOD position we get from screen space.
If we want RT for the whole scene, not just around the camera, RT needs support for LOD as well, and my proposal from the other thread could solve the problem of discontinuities without a need for dynamic geometry.
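Something like this is what I have in mind for the ray setup - just a sketch, with a made-up error bound for how far the proxy can deviate from the full-detail surface:

```cpp
struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Ray { Vec3 origin, dir; float tMin, tMax; };

// Sketch: secondary ray setup when the visible surface comes from the
// high-detail (screen-space) geometry but the traced scene is a lower-LOD proxy.
// 'proxyErrorBound' is an assumed conservative estimate of how far the proxy
// surface can deviate from the full-detail surface (e.g. stored per cluster).
Ray MakeSecondaryRay(Vec3 highLodPosition, Vec3 surfaceNormal, Vec3 rayDir,
                     float proxyErrorBound)
{
    Ray r;
    // Push the origin off the true surface so it does not start *inside* the
    // coarser proxy geometry and immediately self-intersect.
    r.origin = highLodPosition + surfaceNormal * proxyErrorBound;
    r.dir  = rayDir;
    r.tMin = 0.0f;
    r.tMax = 1e30f;
    return r;
}
```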

To me it seems this tech is a way to create a close-to-h/w version of audio, GI and shadows via software instead of using the h/w RT, thereby freeing up resources. This should free up the h/w RT to work on reflections... but maybe I got the wrong end of the stick?
Yes. That's more realistic than the path tracing example I have made. Lumen can not give sharp reflections, RT can. RT is also necessary for accurate area lights. If powerful enough, it could be used for the entire direct lighting.
Though, combining RT and compute lighting does not make either of them free. But I have no doubt we can always beat an 'RT only' solution.

EDIT: thread grows too quickly, so often things have been said already...
 
Yeah, that's what I have been thinking. Build and update a lower-poly proxy of the scene, still using Nanite, for the RT rays. No culling for this one. Maybe some less aggressive LODing based solely on distance to the camera's centerpoint may still be a win, but this can be relaxed in the name of updating the proxy less often.
I wonder: if on console access to the BVH is possible and not black-boxed as speculated, this would be possible by modifying the BVH, e.g. adding / deleting leaf nodes and replacing fine-grained pieces of geometry this way.
On PC it's more restricted and difficult. It's not yet possible to store the BVH offline and stream it with discrete LODs - it has to be rebuilt on each load. Building the bottom-level AS is probably too expensive for this to be practical, and in any case it's a waste.
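Purely speculative, but the kind of interface this would need could look roughly like this (nothing like it exists in DXR today; the names are made up just to make the idea concrete):

```cpp
#include <cstdint>
#include <vector>

// Entirely hypothetical interface. DXR does not expose the BVH layout, so none
// of these calls exist; on a console with a documented BVH format something
// along these lines might be possible.
struct BvhLeafPatch {
    uint32_t leafIndex;            // which leaf node to replace
    std::vector<float> vertices;   // new fine-grained geometry for that leaf
    std::vector<uint32_t> indices;
};

class BottomLevelBvh {
public:
    // Swap the geometry referenced by one leaf without rebuilding the whole BLAS.
    virtual void PatchLeaf(const BvhLeafPatch& patch) = 0;

    // Serialize to a blob that could be stored with the asset and streamed back
    // in with its discrete LOD, instead of being rebuilt on every load.
    virtual std::vector<uint8_t> Serialize() const = 0;

    virtual ~BottomLevelBvh() = default;
};
```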
 
Here's a higher quality version taken from the downloaded 4K YouTube version (frame 4408):

o6kukLd.png
 
From what we suspect of the AMD real-time RT solution in the patents (modifying the CUs):
would those modifications help, or could they be leveraged for the way lighting is believed to be done in UE5?
 
Here's a higher quality version taken from the downloaded 4K YouTube version (frame 4408):
The changes are quite abrupt, but TAA masks them completely, I guess. Neat :)

Now I wonder: why 'insane' detail if 'small' triangles would be good enough?
Either the goal is to impress, while expecting that in practice games will have less insane detail.
Or splatting points is faster than drawing small triangles.

I'm not sure the triangles are HW-rasterized. If they draw them front to back in small tiles with compute, they would get hidden surface removal, similar to beam tracing or Unlimited Detail.
With so many intersecting small models this would avoid a lot of overdraw and culling work, but no idea what's faster.
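A rough sketch of that kind of front-to-back tile rasterization with early-out, written as plain CPU code for clarity (the real thing would be a compute shader, one workgroup per tile, clusters pre-sorted roughly front to back):

```cpp
#include <array>
#include <vector>

// Front-to-back tile rasterization with hidden surface removal: clusters are
// processed nearest first, and a tile stops accepting work as soon as every
// pixel in it has been written. 'Cluster' is assumed to already provide
// tile-local pixel samples for its triangles.
constexpr int kTileSize = 8;

struct Sample { int x, y; float depth; };         // one rasterized pixel sample
struct Cluster { std::vector<Sample> samples; };  // assumed: triangles already
                                                  // expanded to tile-local samples

struct Tile {
    std::array<float, kTileSize * kTileSize> depth;
    std::array<bool, kTileSize * kTileSize> covered{};
    int coveredCount = 0;
    bool Full() const { return coveredCount == kTileSize * kTileSize; }
};

void RasterizeTile(Tile& tile, const std::vector<Cluster>& frontToBackClusters)
{
    for (const Cluster& c : frontToBackClusters) {
        if (tile.Full())
            return;                       // everything further back is hidden
        for (const Sample& s : c.samples) {
            const int i = s.y * kTileSize + s.x;
            if (!tile.covered[i]) {       // front-to-back: first write wins
                tile.covered[i] = true;
                tile.depth[i] = s.depth;
                ++tile.coveredCount;
            }
        }
    }
}
```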
 
From what we suspect of the AMD real-time RT solution in the patents (modifying the CUs):
would those modifications help, or could they be leveraged for the way lighting is believed to be done in UE5?
It would certainly help to be compatible with the geometry, because AMD's patent would allow traversal shaders and thus stochastic continuous LOD.
And if that's possible, they could replace the screen-space refinement with proper world-space intersections and so get rid of the temporal artifacts, which are quite bad. It could also replace the virtual shadow map tech, if that makes sense, or the SDF stuff for some models to add detail to GI.
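Traversal shaders don't exist in DXR yet, so this is speculative, but the stochastic LOD pick itself could be as simple as this sketch (made-up helper, just per-ray dithering between two levels):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Speculative sketch: per-ray stochastic LOD selection, the kind of decision a
// traversal shader could make when choosing which LOD of a cluster/instance to
// descend into. The transition is dithered per ray instead of morphing geometry,
// so on average the two levels blend smoothly and there is no popping.
static float Hash01(uint32_t seed)                 // cheap per-ray random number in [0,1]
{
    seed = seed * 747796405u + 2891336453u;
    seed = ((seed >> ((seed >> 28u) + 4u)) ^ seed) * 277803737u;
    return float((seed >> 22u) ^ seed) * (1.0f / 4294967295.0f);
}

int SelectLod(float distanceToRayOrigin, float lodBase, float lodScale, uint32_t raySeed)
{
    // Continuous LOD value from distance (assumed log2 falloff).
    const float lod  = lodBase + lodScale * std::log2(std::max(distanceToRayOrigin, 1.0f));
    const int   lo   = int(std::floor(lod));
    const float frac = lod - float(lo);
    // Take the coarser level with probability 'frac', otherwise the finer one.
    return (Hash01(raySeed) < frac) ? lo + 1 : lo;
}
```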

The character does not seem to contribute to GI. RT could help with this in any case.
 
Hmm. I was thinking of creating these offline, where you could optimise their appearance. I guess if Nanite LOD goes low enough, you could bring in a 'low poly' model and bake it on the fly, but that depends on how gracefully Nanite decomposes geometry. Can it maintain symmetry and pattern? Hand-crafted models might be a lot more reliable. Or maybe machine-learnt creations based on artist analysis, if we're trying to save time!

I think their LODing must go pretty darn low if they are planning on doing the large draw distances they teased in that last stretch with any semblance of sane performance. They may not look as good as a hand-tuned LOD, but they may be good enough for reflections (especially if they come in as a plan B after screen space).

I think GTA V is a good case study of how simplistic reflection proxies can be and still look good. It renders a cubemap in real time using the lowest LODs of the environment, and they hold up pretty OK. No shaders or shadows either.

In one of those frame-breakdown blog posts about GTA V, they show what the reflection version of the scene looks like, and it's roughly N64-level fidelity.
 
And... how many models are there in the demo? 100? And the whole scene is composed from those?
If so, RT would just work without any need for dynamic LOD at all? Instancing is the solution to all problems on earth?
 
I guess the immediate interpretation of the description - using the source geometry and not having to create game-level content - makes us assume the same raw vertex data in storage is just being streamed, but I guess that's not the case. Alternatively, it is the case, and Epic is somehow fetching the necessary triangles from this raw data. That seems unlikely.

I got the impression that this really is the case.

https://www.pcgamer.com/unreal-engine-5-tech-demo/

"With UE5, developers will no longer have to worry about polygon counts, says Epic. They can import 3D assets made of hundreds of millions or even billions of polygons and the engine will handle the rest, streaming that ultra-complex geometry at the maximum level of detail possible"

"Aside from looking cool, the hope is that UE5's capabilities will make cross-platform development easier for small developers, who will only have to bring in one set of high quality assets. The engine handles whatever complexity scaling is needed, all the way down to phones"

Anyhow, I am concerned about game size with this approach, and I look forward to seeing how they intend to solve that. I tried to ask a question about this earlier; guess it got buried :)

Would it be possible to use this type of engine to crunch down the high-detail assets (from ZBrush etc.) to the maximum in-game detail assets and use those on the final game disc?
 
Anyhow, I am concerned about game size with this approach, and I look forward to seeing how they intend to solve that. I tried to ask a question about this earlier; guess it got buried

Well, devices with less compression power might end up with reduced or uncompressed installs then. Current gen and lower-end PCs are in for a treat; might as well raid those SSDs if you need multiple of them anyway :p
 
You have the full scene for RT, but at lower LOD.
The only problem is if you emit rays from screen space, since they could start behind the lower-LOD geometry. Thus displace the ray start a bit from the higher-LOD position we get from screen space.
If we want RT for the whole scene, not just around the camera, RT needs support for LOD as well, and my proposal from the other thread could solve the problem of discontinuities without a need for dynamic geometry.


Yes. That's more realistic than the path tracing example I have made. Lumen can not give sharp reflections, RT can. RT is also necessary for accurate area lights. If powerful enough, it could be used for the entire direct lighting.
Though, combining RT and compute lighting does not make either of them free. But I have no doubt we can always beat an 'RT only' solution.

EDIT: thread grows too quickly, so often things have been said already...

Tracing a lower LOD than the primary view doesn't work for raytraced shadows; you're going to get obvious light leaks close in.

As for LOD popping, I suspect you can also see the virtualized texture popping as well. Examination of SSD latency shows that even if you optimize the driver stack, latency is still probably several frames even at 30fps. Which is exactly why Nanite and virtualized texturing can't do tracing. Tracing relies on extremely low latency; you might have one frame to nail that reflection before it shifts dramatically.
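For a rough sense of scale, with assumed numbers only (the latency value is a guess, just to show how it turns into frames):

```cpp
#include <cmath>
#include <cstdio>

// Back-of-the-envelope only, with assumed numbers: how many whole frames a
// streaming request spans if the end-to-end time (request queued, SSD read,
// decompression, data usable by the renderer) adds up to ~50 ms.
int main()
{
    const double frameMs30 = 1000.0 / 30.0;   // ~33.3 ms per frame
    const double frameMs60 = 1000.0 / 60.0;   // ~16.7 ms per frame
    const double assumedLatencyMs = 50.0;     // assumption, not a measurement
    std::printf("frames of latency @30fps: %.0f\n", std::ceil(assumedLatencyMs / frameMs30)); // 2
    std::printf("frames of latency @60fps: %.0f\n", std::ceil(assumedLatencyMs / frameMs60)); // 3
    return 0;
}
```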

Nanite's auto LOD generation is very cool. But I suspect the tech demo's setup is only usable for a rather limited set of potential game titles. And after looking over Hammer editor 2 from Source 2, I'm kind of more excited by that than by UE5. The polish and usability of those tools is astounding; the ability to build and iterate a scene is excellent, above UE in a lot of ways.

Frankly it's more interesting to see that, and its ability to make a fun game quickly, than it is to see flashy tech demos.
 
Nanite's auto LOD generation is very cool. But I suspect the tech demo's setup is only usable for a rather limited set of potential game titles. And after looking over Hammer editor 2 from Source 2, I'm kind of more excited by that than by UE5. The polish and usability of those tools is astounding; the ability to build and iterate a scene is excellent, above UE in a lot of ways.

Frankly it's more interesting to see that, and its ability to make a fun game quickly, than it is to see flashy tech demos.

Hammer 2 is incredible in terms of UX and level creation compared to UE & Unity (which is IMO still better than UE in this domain).

Lumen is also going to get wider and faster adoption than Nanite in the foreseeable future, given that it can easily fit into existing pipelines. (Crytek got it right all along when it comes to lighting; Epic knew that when they tried to shoehorn SVOGI into UE4.)
Epic does love their fancy tech demos, but the reality is that there are barely any UE4 games that sit at the top of the AAA games list this generation in terms of IQ/FX/performance, besides Gears 5, which is based on a heavily modified four-year-old build of the engine (4.11) with additional spot updates that added new features (which are way less intrusive than what Nanite is going to be).
Custom in-house engines still have a future.
 
And after looking over Hammer editor 2 from Source 2, I'm kind of more excited by that than by UE5. The polish and usability of those tools is astounding; the ability to build and iterate a scene is excellent, above UE in a lot of ways.

Frankly it's more interesting to see that, and its ability to make a fun game quickly, than it is to see flashy tech demos.

What are the promising features in there?
 
What are the promising features in there?

There was a whole Twitter thread of a level designer going through it all with gifs, but I can't find it. But there's a ton of niceties, like really obvious and easy-to-use game unit measurements everywhere, and a lot more. There's an excellent geometry mode with a ton of UV options that lets you set things up really quickly. There's a tiled mesh thing you'd have to hackily recreate in blueprints or something in UE4, and even better, a thing called "Hotspot" materials that's some oddball combination of a geometry mode and spline meshes, where you just select faces and drag, and the material automatically UVs and tiles correctly as the mesh is created right then and there. It's also fairly stable and not buggy.

Basically a lot of stuff that could be in other engines, and a lot of stuff that used to be in engines and editors back in the day but has slowly disappeared in favor of shunting level design and art passes into one thing. Stuff that Epic has even vaguely promised would come to UE4 one day but hasn't delivered - a lot of conveniences, really. It's not flashy, and I can see why programmers aren't jumping to spend all their time working on these things. But they all add up to making actually creating the game easier, more and faster playtesting, etc.
 
Tracing a lower LOD than the primary view doesn't work for raytraced shadows; you're going to get obvious light leaks close in.
To avoid the leak, my proposed displacement trick should work, also to deal with discrete LOD transitions: https://forum.beyond3d.com/posts/2125852/
(Let me know if you disagree - I have not thought about it in detail.)
Fine details would get lost, but there would be no bad artifacts, I guess.
As for LOD popping, I suspect you can also see the virtualized texture popping as well. Examination of SSD latency shows that even if you optimize the driver stack, latency is still probably several frames even at 30fps. Which is exactly why Nanite and virtualized texturing can't do tracing. Tracing relies on extremely low latency; you might have one frame to nail that reflection before it shifts dramatically.
Like for anything related to lighting, we need to have the entire scene, ignoring camera orientation and visibility, just depending on its position.
So latency is no issue - replace the chunk of geometry only after the new one is in memory. We just don't want the CPU driver generating the bottom-level BVH from scratch for the same static stuff again and again.
On PC we need at least an option to cache it to disk after it has been generated the first time. On console it's surely no problem, because there is only one GPU vendor to support.
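A minimal sketch of that swap policy, with a made-up streaming helper (not a real API) - the old LOD keeps being traced until the replacement is fully resident:

```cpp
#include <chrono>
#include <future>
#include <memory>

// Minimal sketch of the swap policy, with made-up types and a made-up async
// helper: the new LOD of a chunk is streamed and its BLAS built (or loaded from
// a disk cache) in the background, and the live scene only switches over once
// both are resident. Until then the old LOD keeps being traced, so streaming
// latency never shows up as missing geometry, only as slightly stale detail.
struct GeometryChunk {};   // vertices/indices of one LOD of one chunk
struct BlasHandle {};      // the matching bottom-level acceleration structure

struct ReadyLod {
    std::shared_ptr<GeometryChunk> geometry;
    std::shared_ptr<BlasHandle>    blas;
};

// Assumed helper (not a real API): stream the data and produce the BLAS,
// preferring a cached, pre-built BLAS from disk when one exists.
std::future<ReadyLod> StreamAndPrepareLod(int chunkId, int lod);

struct ResidentChunk {
    ReadyLod current;                 // what the TLAS references right now
    std::future<ReadyLod> pending;    // LOD change in flight, if any

    void RequestLod(int chunkId, int lod) { pending = StreamAndPrepareLod(chunkId, lod); }

    // Called once per frame: commit only when the whole replacement is ready.
    void TryCommit()
    {
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            current = pending.get();  // TLAS gets refit/rebuilt after this
        }
    }
};
```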
 
To avoid the leak, my proposed displacement trick should work, also to deal with discrete LOD transitions: https://forum.beyond3d.com/posts/2125852/
(Let me know if you disagree - I have not thought about it in detail.)
Fine details would get lost, but there would be no bad artifacts, I guess.

Like for anything related to lighting, we need to have the entire scene, ignoring camera orientation and visibility, just depending on its position.
So latency is no issue - replace the chunk of geometry only after the new one is in memory. We just don't want the CPU driver generating the bottom-level BVH from scratch for the same static stuff again and again.
On PC we need at least an option to cache it to disk after it has been generated the first time. On console it's surely no problem, because there is only one GPU vendor to support.

What I meant was trying to trace into Nanite - that's not going to work, so now you've got a separate pipeline: one for standard watertight LODs you keep around, and one for virtualized geo for the primary view. And while the LOD skirt would probably work for switching LODs arbitrarily, the scheme won't work close to the ray origin: if the low LOD projects triangles above the ray origin you get false shadowing, if it's lower you get light leak. If you start ray biasing you start missing the perfect contact shadows that are a major benefit of raytracing.

Still, maybe you could always guarantee that lower LODs project triangles below the highest, or ray bias above the geo a bit, then fill in potential leaks with screen-space shadowing. It'd be another hit to image stability, something I was personally kind of hoping would happen less with this next gen (I'm tired of halos around moving objects). But a tradeoff for realtime is a tradeoff. Though Nanite would also essentially have to properly UV and LOD everything for automatic bake-out if the benefits for the art pipeline are to be realized. But hell, maybe with however it works that's within the realm of possibility.
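A sketch of that combination, with assumed helpers (not a real engine API): the biased world-space trace handles everything beyond the bias distance, and a short screen-space march covers the gap it skips:

```cpp
// Sketch of combining a biased world-space shadow ray against the low-LOD proxy
// with a short screen-space contact-shadow march covering the region the bias
// skips. All helpers here are assumed, not a real engine API.
struct Vec3 { float x, y, z; };

bool TraceShadowRayInProxy(Vec3 origin, Vec3 dirToLight, float tMin, float tMax); // assumed
bool ScreenSpaceShadowMarch(Vec3 originWs, Vec3 dirToLight, float maxDistance);   // assumed

float ShadowTerm(Vec3 surfacePosWs, Vec3 dirToLight,
                 float proxyBias /* how far the proxy may deviate from full detail */)
{
    // Beyond the bias distance the proxy is trusted: start the ray there, so the
    // coarser geometry cannot falsely self-shadow the full-detail surface.
    const bool occludedFar = TraceShadowRayInProxy(surfacePosWs, dirToLight,
                                                   /*tMin=*/proxyBias, /*tMax=*/1e30f);

    // Inside the bias distance the proxy is unreliable, so a short screen-space
    // march against the full-detail depth buffer provides the contact shadow.
    const bool occludedNear = ScreenSpaceShadowMarch(surfacePosWs, dirToLight, proxyBias);

    return (occludedFar || occludedNear) ? 0.0f : 1.0f;
}
```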
 
What I meant was trying to trace into Nanite - that's not going to work, so now you've got a separate pipeline: one for standard watertight LODs you keep around, and one for virtualized geo for the primary view. And while the LOD skirt would probably work for switching LODs arbitrarily, the scheme won't work close to the ray origin: if the low LOD projects triangles above the ray origin you get false shadowing, if it's lower you get light leak. If you start ray biasing you start missing the perfect contact shadows that are a major benefit of raytracing.

Still, maybe you could always guarantee that lower LODs project triangles below the highest, or ray bias above the geo a bit, then fill in potential leaks with screen-space shadowing. It'd be another hit to image stability, something I was personally kind of hoping would happen less with this next gen (I'm tired of halos around moving objects). But a tradeoff for realtime is a tradeoff. Though Nanite would also essentially have to properly UV and LOD everything for automatic bake-out if the benefits for the art pipeline are to be realized. But hell, maybe with however it works that's within the realm of possibility.
Yeah, I'll try it out some time with my CPU tracer after I've made some more progress on my auto-LOD tools. After UE5 I'm a bit unsure how to continue. I realize I really need to support instances. I've ignored this issue for too long. The requirement for global parametrization sucks :)

Now, after having some better assumptions about Nanite, I start to think the demo really shows more detail than necessary. We don't look at things that closely in games, and having less repetition and detail, but more FPS and variation, seems the better compromise to me.
Nanite is impressive, but then, looking at images of real nature scenes, I doubt it could ever model those well just by clustering some scanned models. It's just another 'gamey' limitation that becomes obvious after the first wow has settled.
What I really hate in games, mostly seen in UE, is the seams between models, e.g. the intersection of ground and rocks. It always looks inconsistent and fake, and no offline pathtraced lighting can help it. Nanite can hide this a bit better with its insane detail, but the issue is still there.
 