Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Nvidia's RT cores now have a cluster decompression function. I wonder if the decompression is flexible enough to handle different formats, or if there's a particular format that gets pushed as a standard. Really interested to see how this plays out with Nanite. Is this format designed to be similar enough to Nanite's that these clusters can be decompressed by the RT cores? Maybe some conversion is needed, or maybe there are changes to Nanite coming? Will be cool to find out. The new RT cores also do cluster intersection on top of triangle intersection.


Got to a further page from the link:

Games have grown exponentially in geometric complexity, with scenes now comprising billions of polygons. Blackwell addresses this with RTX Mega Geometry, enabling developers to use high-resolution meshes like those from Unreal Engine 5's Nanite directly within ray-traced scenes. This eliminates the need for low-resolution proxy meshes, preserving visual detail while optimizing performance through efficient compression and clustering.
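Out of curiosity about what "efficient compression and clustering" actually has to decode, here's a rough sketch of what a compressed triangle cluster tends to look like, loosely modeled on Nanite's published encoding (clusters of up to 128 triangles, positions quantized against a cluster-local anchor). This is not Nvidia's actual in-memory format; every name below is hypothetical. It just illustrates the work a hardware cluster decompression unit would be undoing.

```cpp
// Hypothetical compressed-cluster layout (not any vendor's real format).
// The common idea across Nanite and similar schemes: one anchor per cluster
// plus bit-packed quantized offsets per vertex.
#include <cstdint>

struct CompressedCluster {
    float    anchor[3];           // cluster-local origin in object space
    float    scale;               // world size of one quantization step
    uint8_t  posBits;             // bits per quantized coordinate
    uint8_t  vertexCount;         // small, so indices fit in a byte
    uint8_t  triangleCount;       // e.g. <= 128 for Nanite-sized clusters
    uint32_t packedPositions[96]; // bit-packed x/y/z offsets from the anchor
    uint8_t  indices[128 * 3];    // 1-byte cluster-local vertex indices
};

// "Decompression" is then mostly: unpack the offset, rescale, add the anchor.
inline void decodeVertex(const CompressedCluster& c,
                         uint32_t dx, uint32_t dy, uint32_t dz, float out[3]) {
    out[0] = c.anchor[0] + dx * c.scale;
    out[1] = c.anchor[1] + dy * c.scale;
    out[2] = c.anchor[2] + dz * c.scale;
}
```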
 
I wonder if this has any relation to AMD's Dense Geometry Format, which was specifically designed with Nanite in mind.
 

Maybe. I'm curious where this landed. Nanite already had its own compression format for clusters, and I think I remember Brian Karis commenting on the AMD one, saying it looked useful. Maybe there's been some agreement behind the scenes between players like Epic, AMD, and Nvidia to standardize things enough that hardware decompression is viable. Not sure. It seems like an area where some standardization would be useful.
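If something like that did get standardized, the payoff would be one compressed representation feeding both the raster and RT paths, instead of keeping a fat decompressed copy of the mesh around just for BVH builds. A minimal sketch of that shape, with every type and function name hypothetical:

```cpp
// Hypothetical sketch: one compressed cluster blob consumed by both paths.
struct ClusterBlob { /* DGF- or Nanite-style compressed cluster blocks */ };

void rasterizeClusters(const ClusterBlob&);      // mesh-shader path decodes per cluster
void updateBlasFromClusters(const ClusterBlob&); // RT cores decode the same blocks

void buildFrame(const ClusterBlob& geom) {
    // Both consumers read the same blocks, so there's no second,
    // decompressed vertex/index buffer resident just for the BVH.
    rasterizeClusters(geom);
    updateBlasFromClusters(geom);
}
```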
 
Dev interview on Eternal Strands, which reminds me a bit of that Ubisoft game that had a name so bad I can't remember it.

Despite having stylised graphics, they're still using Nanite/Lumen to speed up production. It'll be interesting to see how the console versions turn out.


 
@Andrew Lauritzen Any chance you are able to speak on how helpful Mega Geometry is to the problems you have discussed WRT tracing against highly detailed geometry?
Usual caveats - I haven't played with the demo they showed and I can't get into specifics.

That said, it's definitely the direction of API changes that are needed to make RT apply to a much broader range of cases. Performance across a range of hardware is still TBD, but I think it's fair to assume this is a strict improvement on the current APIs on all hardware. It's great to see clusters of triangles being treated as the fundamental primitive of rendering, and LOD being done at that level rather than entire instances. It's also great to see acknowledgement that high geometric complexity does matter, and things like alpha cutouts need to phase out. This is exactly the direction Nanite has been pushing. As I've noted in the past, I do expect raytracing to sort of follow in the direction that Nanite has gone because everything that is a problem for Nanite is at least as large a problem for RT (and then RT adds a few more on top).

I'm curious to see more about the tessellation/displacement stuff in the dragon demo they showed. Obviously they've gone in a bit of a different direction than their past efforts in that space, which I think is definitely a good thing. Nanite has of course added its own tessellation and displacement stuff in the meantime, so it will be interesting to see how similar or different things are there. It would of course be great if they were compatible from a content perspective; given that that's probably a prerequisite to any sort of mass use at this point, I would imagine it was at least a consideration in the design.
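For anyone who hasn't dug into it, "LOD at the cluster level" concretely means something like Nanite's cut selection: every cluster stores a bound on the geometric error its simplification introduced, and you keep a cluster when its own error is acceptable on screen but its coarser parent's isn't. A minimal sketch in that spirit (names are illustrative, not any real API):

```cpp
// Nanite-style cluster LOD selection, simplified.
struct Cluster {
    float selfError;    // error introduced by this cluster's simplification
    float parentError;  // error of the coarser cluster that replaces it
};

bool selectCluster(const Cluster& c, float screenErrorThreshold,
                   float projScale /* converts object-space error to pixels */) {
    float selfPx   = c.selfError   * projScale;
    float parentPx = c.parentError * projScale;
    // Keep this cluster iff it's good enough but its parent isn't. Because
    // errors are forced to be monotonic up the hierarchy, exactly one level
    // passes this test along any path, which yields a crack-free cut.
    return selfPx <= screenErrorThreshold && parentPx > screenErrorThreshold;
}
```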
 

AFAIK the displacement maps are the same ones that have been around for a while: micro-displacement, where each base triangle is tessellated individually (no T-junctions). AMD put out a paper on animating these late last year: https://gpuopen.com/download/publications/AnimatedDMMs.pdf
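The core micromesh trick fits in a few lines: uniformly subdivide each base triangle in barycentric space and push each micro-vertex along the interpolated normal by a scalar from the displacement map; neighboring triangles sample identical values along shared edges, hence no cracks. This is a sketch of the idea, not the real (compressed, hierarchical) DMM encoding:

```cpp
// Micro-displacement of one base triangle, illustrative only.
struct Vec3 { float x, y, z; };

// Barycentric interpolation across a triangle (w = 1 - u - v).
Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c, float u, float v) {
    float w = 1.0f - u - v;
    return { w*a.x + u*b.x + v*c.x,
             w*a.y + u*b.y + v*c.y,
             w*a.z + u*b.z + v*c.z };
}

// A micro-vertex at barycentric (u, v), displaced along the interpolated
// normal by a scalar sampled from the displacement map.
Vec3 microVertex(const Vec3 p[3], const Vec3 n[3],
                 float u, float v, float disp) {
    Vec3 pos = lerp3(p[0], p[1], p[2], u, v);
    Vec3 nrm = lerp3(n[0], n[1], n[2], u, v); // unnormalized is fine here
    return { pos.x + disp*nrm.x, pos.y + disp*nrm.y, pos.z + disp*nrm.z };
}
```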

Problem is that a good amount of the "we're able to move all these triangles around in a BVH" comes from encoding most of them into displacement maps rather than from API changes. And of course this isn't available on PS5, Xbox, or Switch 2 as far as I know (when did Nvidia add this, I wonder; pretty sure it was with the 4xxx series). And those are going to be the targets for a long, long while. Brian mentioned something about voxel research starting up again for UE5, which I'm definitely interested in. I'd been contemplating putting voxels into a tetrahedron mesh, then seeing if I could skin the tet mesh and displace the voxel field inside to get skinning: low-poly BVH and triangle intersection, high geometry density from the voxel representation, which you can encode in BC for compression! But if Brian and crew are doing something in that area I'm more than happy to let the professionals do it.
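For what it's worth, here's roughly what that tet-mesh idea would look like: trace against the low-poly skinned tets, then use the hit point's barycentric coordinates to map back into rest space and sample the voxel field there, so the voxels deform for free with the skinning. Entirely hypothetical sketch:

```cpp
// Map a point from a skinned tetrahedron back to rest space, where the
// (BC-compressed) voxel data lives. All names hypothetical.
struct Vec3 { float x, y, z; };
struct Tet  { Vec3 rest[4]; Vec3 skinned[4]; };

// Barycentric coords of the hit point inside the *skinned* tet identify the
// same point in the *rest* tet; sample the voxel grid at the result.
Vec3 toRestSpace(const Tet& t, const float bary[4]) {
    Vec3 r{0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        r.x += bary[i] * t.rest[i].x;
        r.y += bary[i] * t.rest[i].y;
        r.z += bary[i] * t.rest[i].z;
    }
    return r;
}
```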

BTW here's a shipping version of "megalights", with screen-space shadows because of the title's hardware requirements, but some great ideas and execution from Tomasz on TinyGlade: https://bsky.app/profile/h3r2tic.bsky.social/post/3lclort5v222a

Personally I would say the thing missing from MegaLights so far, TinyGlade included (but hey, that's hardware targets for you), is really large light radii. Looking at a street at night, streetlights pretty obviously cast shadows 10+ meters out. And yes, that means overlapping lights, and I can't see a way around that without increasing computation cost a lot. Still, the obvious cutoff of lights is one of those things that bothers me in games.
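To be fair on the cost side: the MegaLights-style answer to overlap is stochastic sampling, where you evaluate a small fixed number of lights per pixel no matter how many overlap, and divide by the selection probability so the estimate stays unbiased. Big radii then mostly buy you noise rather than cost. A toy sketch with hypothetical types:

```cpp
// Toy stochastic direct lighting: one light sample per pixel regardless of
// how many lights overlap this point. Hypothetical types, no falloff model.
#include <random>
#include <vector>

struct Light {
    float power;
    float intensityAt(float, float, float) const { return power; } // placeholder
};

float shadePixel(const std::vector<const Light*>& overlapping,
                 float px, float py, float pz, std::mt19937& rng) {
    if (overlapping.empty()) return 0.0f;
    std::uniform_int_distribution<size_t> pick(0, overlapping.size() - 1);
    size_t i   = pick(rng);
    float  pdf = 1.0f / overlapping.size(); // uniform selection probability
    // One light eval (and one shadow ray) per pixel; dividing by the pdf
    // keeps the estimate of the summed contribution unbiased.
    return overlapping[i]->intensityAt(px, py, pz) / pdf;
}
```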
 