Games have grown exponentially in geometric complexity, with scenes now comprising billions of polygons. Blackwell addresses this with RTX Mega Geometry, enabling developers to use high-resolution meshes like those from Unreal Engine 5's Nanite directly within ray-traced scenes. This eliminates the need for low-resolution proxy meshes, preserving visual detail while optimizing performance through efficient compression and clustering.
I wonder if this has any relation to AMD's Dense Geometry Format, which was specifically designed with Nanite in mind. Nvidia's RT cores now have a cluster decompression function. I wonder if the decompression is flexible enough to handle different formats, or if there's a particular format that gets pushed as a standard. Really interested to see how this plays out with Nanite. Is this format designed to be similar enough to Nanite that these clusters can be decompressed by the RT cores? Maybe some conversion is needed, or maybe changes to Nanite are coming? Will be cool to find out. The new RT cores also do cluster intersection on top of triangle intersection.
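For context, here is a minimal sketch of what a triangle "cluster" generally means in this discussion: a small, self-contained group of triangles with its own local vertex buffer that can be treated as a single acceleration-structure primitive. The field layout and the 128-triangle / 256-vertex budget are assumptions borrowed from Nanite-style clusters, not NVIDIA's or AMD's actual format.

```cpp
// Illustrative sketch only: a generic triangle "cluster" of the kind being
// discussed. The layout and limits are assumptions (Nanite-style budgets),
// not NVIDIA's or AMD's actual cluster encoding.
#include <cstdint>
#include <vector>

struct Float3 { float x, y, z; };

struct TriangleCluster {
    static constexpr uint32_t kMaxTriangles = 128;  // assumed Nanite-like budget
    static constexpr uint32_t kMaxVertices  = 256;

    std::vector<Float3>  positions;  // local vertex buffer, <= kMaxVertices
    std::vector<uint8_t> indices;    // 3 per triangle; 8-bit suffices because
                                     // indices are local to the cluster
    Float3 aabbMin, aabbMax;         // bounds used when the cluster is treated
                                     // as a single BVH leaf primitive
};

// With clusters as the acceleration-structure primitive, top-level traversal
// only has to find candidate clusters; per-triangle intersection (and, on
// Blackwell, decompression of the cluster's packed form) happens inside the leaf.
```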
NVIDIA GeForce RTX 50 Technical Deep Dive (www.techpowerup.com)
In this article we cover everything NVIDIA revealed about the GeForce RTX 50 Series: the new graphics card models and their pricing, Blackwell architecture, DLSS 4 updates, Neural Rendering, Reflex 2 for faster headshots, improved AI performance, and creator-focused tools.
Got to a further page from the link:
I wonder if this has any relation to AMD's Dense Geometry Format, which was specifically designed with Nanite in mind.
It has no relation to AMD’s format, which is lossy, whereas the HW compression for MG clusters in Blackwell is lossless.
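To make the lossy-versus-lossless distinction concrete, here is a small illustrative sketch (not the actual DGF or Blackwell cluster encoding): a "lossy" path snaps coordinates to a quantization grid and cannot reproduce the original bits, while a "lossless" path delta-codes the raw bit patterns and round-trips exactly.

```cpp
// Minimal sketch of the lossy-vs-lossless distinction discussed above.
// Both schemes here are generic illustrations, not real geometry formats.
#include <cstdint>
#include <cmath>
#include <cstring>

// Lossy: snap a coordinate to a 16-bit grid spanning [lo, hi].
// Decoding returns a nearby value, not necessarily the original bits.
uint16_t encodeLossy(float v, float lo, float hi) {
    float t = (v - lo) / (hi - lo);
    return static_cast<uint16_t>(std::lround(t * 65535.0f));
}
float decodeLossy(uint16_t q, float lo, float hi) {
    return lo + (q / 65535.0f) * (hi - lo);
}

// Lossless: delta-code the raw 32-bit patterns against a predictor.
// Decoding reproduces the input bit-for-bit; the compression win comes from
// the deltas being small/correlated and therefore cheap to entropy-code.
uint32_t encodeLossless(float v, float predicted) {
    uint32_t a, b;
    std::memcpy(&a, &v, 4);
    std::memcpy(&b, &predicted, 4);
    return a - b;  // exact integer delta (wraps modulo 2^32)
}
float decodeLossless(uint32_t delta, float predicted) {
    uint32_t b;
    std::memcpy(&b, &predicted, 4);
    uint32_t a = b + delta;
    float v;
    std::memcpy(&v, &a, 4);
    return v;  // bit-identical to the original
}
```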
@Andrew Lauritzen Any chance you are able to speak on how helpful Mega Geometry is to the problems you have discussed WRT tracing against highly detailed geometry?
Usual caveats - I haven't played with the demo they showed and I can't get into specifics.
That said, it's definitely the direction of API changes that are needed to make RT apply to a much broader range of cases. Performance across a range of hardware is still TBD, but I think it's fair to assume this is a strict improvement on the current APIs on all hardware. It's great to see clusters of triangles being treated as the fundamental primitive of rendering, and LOD being done at that level rather than entire instances. It's also great to see acknowledgement that high geometric complexity does matter, and things like alpha cutouts need to phase out. This is exactly the direction Nanite has been pushing. As I've noted in the past, I do expect raytracing to sort of follow in the direction that Nanite has gone because everything that is a problem for Nanite is at least as large a problem for RT (and then RT adds a few more on top).
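As a rough illustration of "LOD at the cluster level rather than entire instances", the following sketch selects clusters by projecting their simplification error to screen space, in the style of Nanite's cluster hierarchy. The error metric, thresholds, and structure names are assumptions for illustration only, not the actual Nanite or Mega Geometry criteria.

```cpp
// Sketch of cluster-level LOD selection: each cluster (not each instance)
// carries a bound on the geometric error of its simplified version and is
// used only while that error projects to less than ~1 pixel.
#include <cmath>

struct ClusterLOD {
    float boundingSphereRadius;  // world-space bounds of the cluster
    float simplifyError;         // world-space error introduced by this LOD
    float distanceToCamera;
};

// Project a world-space error to an approximate size in pixels.
float projectedErrorPixels(const ClusterLOD& c, float screenHeightPx, float fovY) {
    float viewScale = screenHeightPx / (2.0f * std::tan(fovY * 0.5f));
    return c.simplifyError * viewScale / std::fmax(c.distanceToCamera, 1e-4f);
}

// A cluster is selected when its own error is imperceptible but its parent's
// (coarser) error is not, so different parts of one mesh can sit at
// different LODs simultaneously.
bool selectCluster(const ClusterLOD& self, const ClusterLOD& parent,
                   float screenHeightPx, float fovY, float thresholdPx = 1.0f) {
    return projectedErrorPixels(self,   screenHeightPx, fovY) <= thresholdPx &&
           projectedErrorPixels(parent, screenHeightPx, fovY) >  thresholdPx;
}
```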
I'm curious to see more about the tessellation/displacement stuff in the dragon demo they showed. Obviously they've gone a bit of a different direction than their past efforts in that space, which I think is definitely a good thing. Nanite of course has added its own tessellation and displacement stuff in the meantime, so it will be interesting to see how similar or different things are there. It would of course be great if they are compatible from a content perspective - given that compatibility is probably a prerequisite to any sort of mass use at this point in time, I would imagine it was at least a consideration of the design.
AFAIK the displacement maps are the same ones that have been around for a while: microdisplacement, where triangles are tessellated individually (no t-junctions). AMD put out a paper on animating these late last year: https://gpuopen.com/download/publications/AnimatedDMMs.pdf
I suspect it's actually something different, but we'll see when they give out more details I guess.
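For reference, a rough sketch of the microdisplacement idea mentioned in the quote above: a single base triangle is uniformly subdivided in barycentric space and each micro-vertex is pushed along an interpolated direction by a scalar height. The data layout and the displacement() callback are illustrative assumptions rather than the DMM spec, and stitching between base triangles at different levels is not shown.

```cpp
// Minimal sketch of per-triangle microdisplacement, assuming a simple
// height-at-(u,v) callback. Uniform subdivision of one base triangle yields
// 4^level micro-triangles with no internal t-junctions.
#include <functional>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Generate displaced micro-vertices for one base triangle at a given level.
// level = k gives (2^k + 1)(2^k + 2)/2 micro-vertices and 4^k micro-triangles.
std::vector<Vec3> displaceTriangle(
    const Vec3 basePos[3], const Vec3 baseDir[3], int level,
    const std::function<float(float, float)>& displacement /* height at (u, v) */) {
    int n = 1 << level;
    std::vector<Vec3> verts;
    for (int i = 0; i <= n; ++i) {
        for (int j = 0; j <= n - i; ++j) {
            float u = float(i) / n, v = float(j) / n, w = 1.0f - u - v;
            Vec3 p = basePos[0] * w + basePos[1] * u + basePos[2] * v;  // base surface
            Vec3 d = baseDir[0] * w + baseDir[1] * u + baseDir[2] * v;  // displacement dir
            verts.push_back(p + d * displacement(u, v));
        }
    }
    return verts;
}
```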
But if Brian and crew are doing something around that area I'm more than happy to let the professionals do it.
Honestly, the more people experimenting with things the better! They/we read all the papers and articles that come out in the area as well, of course - we really need a broad research space, not one centralized on the people and priorities of a small number of companies.
BTW here's a shipping version of "megalights" with screenspace shadows (because of the title's hardware requirements), but some great ideas and execution from Tomasz on TinyGlade: https://bsky.app/profile/h3r2tic.bsky.social/post/3lclort5v222a
Yes, I'm looking forward to firing that up again to see it! I remember him talking about that a bit earlier but good to see it go in.
Personally I would say the thing missing from MegaLights so far (including TinyGlade, but hey, that's hardware targets for you) is really large light radii. Looking at a street at night, streetlights pretty obviously cast shadows 10+ meters out. And yeah, that means overlapping lights, and I can't see a way around that short of just increasing computation cost a lot. Still, the obvious cutoff of lights is one of those things that bothers me in games.
Agreed. MegaLights and ReSTIR support large light radii, but they can introduce a fair amount of noise if lots of lights with relevant contributions overlap. To be fair, moderate amounts of overlap are not a huge issue, depending on contributions. There was certainly a lot more overlap in the MegaLights demo, for instance, than the street light case you describe.
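A toy sketch of why heavy light overlap turns into noise in this one-sample-per-pixel regime: picking a single light proportionally to its unshadowed contribution is unbiased, but when several comparable lights with differing visibility overlap, individual estimates swing between extremes. The scene setup and numbers below are purely illustrative, not MegaLights or ReSTIR code.

```cpp
// Toy demonstration of one-sample stochastic direct lighting variance.
// Eight equal overlapping lights, half shadowed: the mean is correct (~4),
// but individual samples are 0 or 8, which shows up as per-pixel noise.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Light { float contribution; bool visible; };  // unshadowed value + shadow result

float oneSampleEstimate(const std::vector<Light>& lights, std::mt19937& rng) {
    float total = 0.0f;
    for (const Light& l : lights) total += l.contribution;
    std::uniform_real_distribution<float> u(0.0f, total);
    float pick = u(rng), accum = 0.0f;
    for (const Light& l : lights) {
        accum += l.contribution;
        if (pick <= accum) {
            float pdf = l.contribution / total;                  // selection probability
            return (l.visible ? l.contribution : 0.0f) / pdf;    // unbiased estimator
        }
    }
    return 0.0f;
}

int main() {
    std::mt19937 rng(42);
    std::vector<Light> overlapping;
    for (int i = 0; i < 8; ++i) overlapping.push_back({1.0f, i % 2 == 0});

    const int N = 100000;
    double sum = 0.0, sumSq = 0.0;
    for (int i = 0; i < N; ++i) {
        float e = oneSampleEstimate(overlapping, rng);
        sum += e;
        sumSq += double(e) * e;
    }
    double mean = sum / N;
    double stddev = std::sqrt(sumSq / N - mean * mean);
    std::printf("mean %.3f  stddev %.3f (the noise a denoiser has to clean up)\n",
                mean, stddev);
    return 0;
}
```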