> UE5.1 has mesh shaders. Sadly nobody yet has benchmarked the influence of mesh shaders by comparing the 5700 to the 2060 Super or similar in a recent UE5.1 demo or Fortnite.
To measure the win of mesh shaders, you'd need to do so on the same architecture, once with and once without using mesh shaders.
Virtual shadow maps in Unreal Engine 5 seem to be pretty crucial as I understand it.
Shadow maps are on the way out.
> Yeah, mesh shaders do seem quite dead right out of the gate.
Well, RT does not work with Nanite, nor does it work with mesh-shader-generated geometry.
> Well, RT does not work with Nanite, nor does it work with mesh-shader-generated geometry.
Why wouldn't RT work with mesh shaders?
> It seems like RGT does not know tensor cores are not in use for denoising at all, contrary to what Nvidia had planned, which makes that rumor more credible in my eyes.
A guy not knowing that tensor cores are not used for denoising does not make his rumors more credible, rather the opposite?
Nvidia probably had no success in moving denoising to a neural network running on the tensor cores, which is why they are now working on a fixed-function denoiser accelerator. Makes sense, and it is indeed very interesting!
> Why wouldn't RT work with mesh shaders?
Because mesh shaders generate geometry temporarily on chip, render it, and forget it.
> Because mesh shaders generate geometry temporarily on chip, render it, and forget it.
Are you sure you understand properly what a mesh shader is and how it works? For one, it doesn't render anything; it outputs triangles for pixel shaders (or to a UAV if no shading is needed).
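For anyone following along, here is roughly what the mesh shader path looks like at the API level. This is a minimal D3D12 sketch, not taken from any of the posts above; the function name and parameters are made up, and render target / viewport setup is omitted:

```cpp
// Minimal sketch (assumes a device exposing ID3D12GraphicsCommandList6):
// a mesh shader draw is recorded like a compute dispatch. Note that no
// vertex or index buffers are bound -- the mesh shader threadgroups emit
// vertices and primitives that go straight to the rasterizer, which then
// feeds the pixel shader. "meshPso" is assumed to be a pipeline state
// built from a mesh shader + pixel shader pair.
#include <d3d12.h>

void RecordMeshShaderDraw(ID3D12GraphicsCommandList6* cmd,
                          ID3D12RootSignature* rootSig,
                          ID3D12PipelineState* meshPso,
                          UINT meshletGroupCount)
{
    cmd->SetGraphicsRootSignature(rootSig);
    cmd->SetPipelineState(meshPso);

    // One threadgroup per meshlet; each group writes its triangles on chip
    // and they are consumed by the rasterizer immediately. There is no
    // API-visible buffer holding the expanded geometry afterwards.
    cmd->DispatchMesh(meshletGroupCount, 1, 1);
}
```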
> So you can't include the output in a BVH build or TraceRay(), as this would require storing the geometry in VRAM, defeating the advantage of mesh shaders.
Why would storing geometry in VRAM defeat the advantage of mesh shaders?
> Are you sure you understand properly what a mesh shader is and how it works? For one, it doesn't render anything; it outputs triangles for pixel shaders (or to a UAV if no shading is needed).
No, I'm not sure about any features I haven't used myself yet.
> ...but the whole point of mesh shaders is to stay 'on chip', bypassing VRAM. You disagree?
Microsoft does, as shown in the link above.
> But even in this case neither has any blocks on the ability to use the geometry for BVH and RT?
Well, if you want to process the geometry twice, once for BVH and then each frame with mesh shaders, you can do that.
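To make the "geometry must live in VRAM" point concrete, here is a hedged D3D12/DXR sketch (my own function name and parameters, buffers assumed to already exist): a bottom-level acceleration structure build consumes triangles through GPU virtual addresses, which transient mesh shader output never has, so you would keep a resident copy of the geometry and process it twice, once here for the BLAS and once per frame through the mesh shader path.

```cpp
// Minimal DXR BLAS build sketch: the inputs are GPU virtual addresses of
// vertex and index buffers, i.e. geometry resident in video memory.
#include <d3d12.h>

void RecordBlasBuild(ID3D12GraphicsCommandList4* cmd,
                     D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount, UINT vertexStride,
                     D3D12_GPU_VIRTUAL_ADDRESS indexBuffer, UINT indexCount,
                     D3D12_GPU_VIRTUAL_ADDRESS scratchBuffer,
                     D3D12_GPU_VIRTUAL_ADDRESS resultBuffer)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer;  // must be resident in VRAM
    geom.Triangles.VertexBuffer.StrideInBytes = vertexStride;
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;
    geom.Triangles.IndexBuffer  = indexBuffer;                 // must be resident in VRAM
    geom.Triangles.IndexFormat  = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount   = indexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs.Type        = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    build.Inputs.Flags       = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    build.Inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    build.Inputs.NumDescs    = 1;
    build.Inputs.pGeometryDescs = &geom;
    build.DestAccelerationStructureData    = resultBuffer;
    build.ScratchAccelerationStructureData = scratchBuffer;

    cmd->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```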
> Nanite's issue with h/w RT isn't in how the geometry is produced, it's in the fact that this geometry bypasses the h/w rasterizer, I think? It still seems like something which can probably be solved to be usable in RT with more complex compute?
No. The SW rasterizer has nothing to do with the incompatibility. It's even optional.
Denoising seems too much of an open problem to get HW acceleration. The obvious next steps for RT would be a HW BVH builder and ray reordering.
Dedicated silicon might be a bit much for BVH construction.
> It's what's needed to really push ray tracing and performance forward.
The future should be to stream the BVH, imo. Bottom levels could still be built on the client to keep storage small, but top levels would profit from a high-quality offline build.
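For context on why this would need new API or hardware support: the closest thing DXR exposes to "streaming" a BVH today is serializing a built acceleration structure and deserializing it later, and that blob is driver/GPU specific, so it works as a local cache but not as a portable, offline-built BVH. A hedged sketch (function name is mine):

```cpp
// Serialize an already-built BLAS into a driver-specific blob that can
// later be restored with COPY_MODE_DESERIALIZE on the same driver/GPU.
#include <d3d12.h>

void SerializeBlasForCaching(ID3D12GraphicsCommandList4* cmd,
                             D3D12_GPU_VIRTUAL_ADDRESS builtBlas,
                             D3D12_GPU_VIRTUAL_ADDRESS serializedBlob)
{
    cmd->CopyRaytracingAccelerationStructure(
        serializedBlob, builtBlas,
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_COPY_MODE_SERIALIZE);
}
```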
> ...when those N3 wafers for Nvidia can produce more GB202s for AI
GB100 is for AI, GB202 is a gaming chip.
> GB100 is for AI, GB202 is a gaming chip.
I mean, AD102 is basically a "gaming chip" but is used in the RTX 6000, which is used in workstations that involve work with AI (like those Nvidia RTX workstations announced last year).
I'm also wondering where the idea that wafers, of all things, are the limiting factor on producing more AI chips has come from. From all we know this isn't true, and there are no indications of issues with non-AI chip shipments at the moment.
As for the split in process tech, that's something to ponder, I'm sure, but it's not clear there's a benefit to using a less advanced process: even at the same price per chip, you still get the advantage of more chips per wafer by going fully onto a more advanced node. It should also be a lot cheaper to put all chips from one family on one node.
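Quick back-of-the-envelope on the chips-per-wafer point, using the usual first-order dies-per-wafer approximation. The die sizes below are purely illustrative, not real GB20x numbers:

```cpp
// DPW ~= pi*(d/2)^2 / A - pi*d / sqrt(2*A), with d = wafer diameter and
// A = die area (both in mm). A crude estimate that ignores yield.
#include <cmath>
#include <cstdio>

static const double kPi = 3.14159265358979323846;

double DiesPerWafer(double waferDiameterMm, double dieAreaMm2)
{
    const double r = waferDiameterMm / 2.0;
    return kPi * r * r / dieAreaMm2
         - kPi * waferDiameterMm / std::sqrt(2.0 * dieAreaMm2);
}

int main()
{
    // Hypothetical: the same design at ~600 mm^2 on the older node and
    // ~450 mm^2 after a shrink, on a standard 300 mm wafer.
    std::printf("old node: ~%.0f candidate dies per wafer\n", DiesPerWafer(300.0, 600.0));
    std::printf("new node: ~%.0f candidate dies per wafer\n", DiesPerWafer(300.0, 450.0));
    return 0;
}
```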