Why would I be impressed by the video above when Nvidia did it 12 years ago?!
Adding to the above, the tessellation demo uses a low-poly base mesh.
If the base mesh is a cube, the tessellated object will likely look like a sphere.
Notice the effect applies smoothing to the base mesh, but it does not add detail.
In other words: with an increasing tessellation factor we add more vertices, and those vertices appear at a higher spatial frequency.
But we don't get detail at that same frequency - we only make our sphere smoother and smoother, so flat sections become round.
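To illustrate, here is a minimal C++ sketch of smoothing in the style of Phong tessellation - not the demo's actual code, and all names are my own. The point is that a newly inserted vertex is computed purely from the base triangle's corner positions and normals, so no new information ever enters the surface:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

// Project point q onto the tangent plane through corner p with unit normal n.
Vec3 projectToPlane(Vec3 q, Vec3 p, Vec3 n) {
    return q - n * dot(q - p, n);
}

// Phong-tessellation-style smoothing: a vertex inserted at barycentric
// coordinates (u, v, 1-u-v) is a blend of the flat position's projections
// onto the three corner tangent planes. The only inputs are the base
// triangle's positions and normals - raising the tessellation factor adds
// vertices at higher frequency, but no detail beyond the base mesh.
Vec3 smoothVertex(Vec3 p0, Vec3 p1, Vec3 p2,
                  Vec3 n0, Vec3 n1, Vec3 n2,
                  float u, float v) {
    float w = 1.0f - u - v;
    Vec3 flat = p0 * u + p1 * v + p2 * w;        // plain linear interpolation
    return projectToPlane(flat, p0, n0) * u
         + projectToPlane(flat, p1, n1) * v
         + projectToPlane(flat, p2, n2) * w;     // curved (smoothed) position
}

int main() {
    Vec3 p0{0, 0, 0}, p1{1, 0, 0}, p2{0, 1, 0};
    Vec3 n0 = normalize({-0.3f, -0.3f, 1}), n1 = normalize({0.3f, -0.3f, 1}),
         n2 = normalize({-0.3f, 0.3f, 1});
    Vec3 c = smoothVertex(p0, p1, p2, n0, n1, n2, 1 / 3.0f, 1 / 3.0f);
    std::printf("centroid bulges to (%f, %f, %f)\n", c.x, c.y, c.z);  // z > 0
}
```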
If we want to add detail, we can do so e.g. using displacement mapping. (It seems the demo does this, but I did not watch.)
Now we can add detail, and we could also express our smoothing function in the height values of the displacement map. Then we get something like NV's new ray-traced micromesh stuff, avoiding the need to implement parametric or subdivision surfaces.
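Continuing the sketch above (reusing Vec3), displacement would look roughly like this - heightAt() is a hypothetical placeholder for sampling a real displacement texture, and this is where genuine detail enters:

```cpp
// heightAt() stands in for a displacement-texture fetch at the vertex UVs;
// here it's just a fake procedural bump pattern.
float heightAt(float u, float v) {
    return 0.05f * std::sin(40.0f * u) * std::sin(40.0f * v);
}

// Displacement mapping: offset the smoothed vertex along its interpolated
// normal. Unlike the pure smoothing above, this adds real surface detail,
// and the smoothing itself could also be baked into these height values.
Vec3 displaceVertex(Vec3 smoothPos, Vec3 smoothNormal,
                    float u, float v, float scale) {
    return smoothPos + smoothNormal * (heightAt(u, v) * scale);
}
```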
That's not bad at all, but it's still very limited.
The problem is that such methods cannot reduce below the base mesh level. Thus we fail at the biggest goal of dynamic LOD: we want to reduce any geometry to the detail we actually need.
The base mesh can become pretty high-poly for topologically complex objects, and at some distance, any base mesh resolution is too high. So we waste performance and also memory.
Personally I call such methods 'detail amplification': they allow adding detail dynamically, but they cannot reduce it.
That's the problem Nanite solves efficiently. I mean, it's limited to instances of objects and cannot merge many objects into a single one to reduce further. But for now I see no efficiency issues in practice - we can just cull such very distant objects.
Nanite cannot add detail; it is a pure reduction method. It can reduce a (potentially very detailed) model down to a single-digit triangle count, I guess.
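For contrast, a toy sketch of what hierarchical reduction looks like - my own simplification, not Epic's actual data structures or code. Each cluster stores the error its simplification introduced, and traversal stops as soon as that error would be invisible, so a distant object bottoms out at the root's handful of triangles:

```cpp
#include <vector>

// Toy sketch of cluster-hierarchy LOD selection, loosely in the spirit of
// Nanite but not Epic's actual code. Each node holds a simplified version
// of its children's geometry; 'error' is the deviation that simplification
// introduced, in world units.
struct Cluster {
    float error;                // simplification error of this node
    int triangleCount;          // triangles in this node's geometry
    std::vector<int> children;  // indices into the cluster array; empty = leaf
};

// Descend from the root only while the projected error is still visible.
// Reduction has no base-mesh floor: far away, the cut is just the root.
int countCutTriangles(const std::vector<Cluster>& clusters, int node,
                      float distance, float errorThreshold) {
    const Cluster& c = clusters[node];
    float projected = c.error / distance;  // crude screen-space error
    if (c.children.empty() || projected < errorThreshold)
        return c.triangleCount;            // render this cluster as-is
    int triangles = 0;
    for (int child : c.children)
        triangles += countCutTriangles(clusters, child, distance, errorThreshold);
    return triangles;
}
```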
So the NV demo is not the same thing, and it solves a different problem.
Using Nanite for traditional geometry content is inefficient, and because Nanite doesn't work properly with hardware raytracing, you have to use another inefficient GI solution like Lumen...
Why do you say Nanite is inefficient? Have I missed something interesting?
I looked at the code and it seems efficient. Going by the presentations as well, it seems well thought out and solves the hard problems in efficient and elegant ways. It's just good. Exactly what this industry needs.
From what I know about Lumen, it's pretty much the opposite: inefficient and a bunch of hacks. But it's not as if others in the industry do better.
My assumption is that Lumen, not Nanite, causes the high HW requirements of UE5 games. But maybe that's a dated impression, since it dates back to discussions about the first demo on PS5.
Notice that the geometry in the Immortals game is not the crazy detail we saw in early UE5 demos. It's average geometry resolution, I would say, and this makes it less likely that Nanite is the bottleneck.
Also, please notice that Nanite not working with HW RT is entirely a problem of the RT APIs. It's the APIs that are broken. The people who designed the APIs did not think dynamic geometry would be required, it seems.
But any dynamic and fine-grained LOD solution which aims to reduce detail requires dynamic geometry. Thus the API designers have actively prevented a solution to the LOD problem, period.
Intent or not, it's their failure alone, not Epic's. As it stands, RT is currently not future-proof, but Nanite is.
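To make the API complaint concrete, a conceptual sketch - not real DXR/Vulkan code. The acceleration structure is only valid for the exact triangle set it was built from, so a fine-grained LOD whose cut changes with the camera forces constant rebuilds:

```cpp
#include <cstdio>

// Conceptual sketch, not real DXR/Vulkan code: a BLAS is valid only for
// the triangle set it was built from, so a Nanite-style LOD whose selected
// cut changes with the camera invalidates it almost every frame.
struct BottomLevelAS {
    int builtForTriangles = -1;  // triangle set the structure was built for
};

void traceFrame(BottomLevelAS& blas, int currentCutTriangles) {
    if (blas.builtForTriangles != currentCutTriangles) {
        // With fine-grained LOD this branch is taken nearly every frame,
        // and the rebuild cost dwarfs the savings from reduced geometry.
        std::printf("rebuilding BLAS for %d triangles\n", currentCutTriangles);
        blas.builtForTriangles = currentCutTriangles;
    }
    // ... trace rays against blas ...
}
```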