They literally stated that they only use primitive shaders for geometry that is not performant using Nanite (foliage, hair, grass, etc).
And there is no reason the Xbox or PC couldn't also use primitive shaders/mesh shaders for that geometry as well.
In one of the presentations they had a slide stating that only 10% of the geometry being rendered in the demo was using primitive shaders, or something along those lines. The same slide stated that Nanite only works for rigid geometry, at least for now. And I think in the Digital Foundry article after the demo was released they quoted somebody at Epic saying that primitive shaders are a fallback rasterization technique for geometry that Nanite can't really handle well. This info is out there.
Where? Can you share the link?
https://wccftech.com/lumen-gi-uses-...nly-nanite-exploits-primitive-shaders-on-ps5/
"The vast majority of triangles are software rasterised using hyper-optimised compute shaders specifically designed for the advantages we can exploit. As a result, we've been able to leave hardware rasterisers in the dust at this specific task. Software rasterisation is a core component of Nanite that allows it to achieve what it does. We can't beat hardware rasterisers in all cases though so we'll use hardware when we've determined it's the faster path. On PlayStation 5 we use primitive shaders for that path which is considerably faster than using the old pipeline we had before with vertex shaders."
The quote referencing the PlayStation’s shaders is ambiguous. You can take it to mean that:
- the PS5's primitive shaders are used only when the hardware path beats the software path, assuming they have a solution that does some parts with compute shaders and others with hardware rasterizers within the same engine on the same hardware.
Or
- the PS5 is an example of the hardware path being the faster option, and everything on PS5 uses primitive shaders in hardware.
The calculations above, if true, as well as the potential complexity of mixing and matching in software, probably suggest the latter?
Nanite and mesh shaders work at different levels of abstraction.
Mesh shaders are about a new way of feeding geometry to the rasterizer: they allow cooperative multitasking similar to compute within the graphics pipeline (one of the biggest constraints of the traditional pipeline is that every thread is independent and cannot use any information produced by the others), and they avoid having some fixed-function h/w blocks become the bottleneck with arbitrary amounts of geometry.
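To make the "cooperative" part concrete, here's a rough CPU-side C++ sketch (not actual mesh shader code; the Meshlet layout and the crude cull test are illustrative assumptions). The point is that a mesh shader workgroup sees a whole cluster of triangles at once, so it can throw away the entire cluster in one decision, which independent vertex shader threads can't do:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical meshlet: a small cluster of triangles plus a bounding sphere,
// mirroring how mesh shaders consume geometry in fixed-size groups.
struct Meshlet {
    float cx, cy, cz, radius; // bounding sphere in view space
    int triangleCount;
};

// One "workgroup" handles one meshlet and can decide for the whole cluster
// at once -- e.g. cull it -- before any per-vertex work happens. Independent
// vertex shader threads have no such shared view of the cluster.
bool meshletVisible(const Meshlet& m, float nearZ, float farZ) {
    // Crude near/far test only; a real implementation also tests the frustum sides.
    return (m.cz + m.radius) > nearZ && (m.cz - m.radius) < farZ;
}

int main() {
    std::vector<Meshlet> meshlets = {
        {0, 0, 5, 1, 124},  // in front of the camera -> kept
        {0, 0, -9, 1, 124}, // behind the camera -> whole cluster skipped
    };
    for (const Meshlet& m : meshlets) {
        if (meshletVisible(m, 0.1f, 1000.0f))
            std::printf("rasterize %d triangles\n", m.triangleCount);
        else
            std::printf("culled a whole meshlet, %d triangles never touched\n", m.triangleCount);
    }
}
```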
Nanite is about how you handle and process geometry in an efficient way, and decide *what* to send to the rasterizer in the first place. If tris are big enough, you feed them to the h/w rasterizer through mesh/primitive shaders (not a requirement tho, you could use the traditional pipeline as well); if they're 1 pixel-sized or smaller, you feed them to a software-based compute rasterizer (which can be faster than the h/w one).
In a sense Nanite is both a client and a superset of mesh shaders, and does LOD management and tris classification on top to decide what's the optimal path for each.
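As a rough sketch of that classification step (the 1-pixel threshold and the area test are my assumptions, Epic hasn't published the exact heuristic), the routing decision could look something like this:

```cpp
#include <cmath>
#include <cstdio>

// 2D screen-space vertex after projection.
struct Vec2 { float x, y; };

// Triangle area in pixels (shoelace formula).
float screenSpaceArea(Vec2 a, Vec2 b, Vec2 c) {
    return 0.5f * std::fabs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

// Hypothetical router: big triangles go to the hardware rasterizer (via
// mesh/primitive shaders), pixel-sized ones to the software compute
// rasterizer. The threshold of 1 pixel is an illustrative guess.
enum class RasterPath { Hardware, SoftwareCompute };

RasterPath choosePath(Vec2 a, Vec2 b, Vec2 c) {
    const float kPixelThreshold = 1.0f;
    return screenSpaceArea(a, b, c) > kPixelThreshold ? RasterPath::Hardware
                                                      : RasterPath::SoftwareCompute;
}

int main() {
    Vec2 big[3]  = {{0, 0}, {40, 0}, {0, 40}}; // ~800 px^2
    Vec2 tiny[3] = {{0, 0}, {1, 0}, {0, 1}};   // 0.5 px^2
    std::printf("big  -> %s\n", choosePath(big[0], big[1], big[2]) == RasterPath::Hardware ? "hw" : "sw");
    std::printf("tiny -> %s\n", choosePath(tiny[0], tiny[1], tiny[2]) == RasterPath::Hardware ? "hw" : "sw");
}
```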
Primitive shaders is AMD's name for the h/w blocks that implement Mesh shaders (which is both Nvidia's name for the h/w feature and DirectX's name for the API functionality).
Somewhat confusingly, AMD decided to stick with the Primitive shaders name they introduced with Vega GPUs, which was a proto-Mesh-shader functionality: less flexible, and not intended to be exposed to developers but to be used by the driver, automatically transforming Vertex/Geometry/Tessellation workloads into Primitive shaders.
That never really worked (outside of synthetic benchmarks and non-gaming applications), but the hardware was extended and re-purposed to implement Mesh shaders in RDNA, where they are explicitly exposed and left for developers to use.
In practice, there are no public details about the differences between Nvidia's and AMD's implementations of Mesh shaders, and there's no real use case yet, as adoption in games is gonna be even slower than ray tracing's.
Pretty sure even with Nanite, UE5 uses deferred shading. Triangle sizes don't matter with deferred shading, since all pixel shading (compute or via pixel shaders) happens for the full-screen quad anyway, with attributes fetched from the already-rasterized geometry buffer (normals, albedo, roughness, etc). I believe sw rasterization refers to those 1-pixel-or-smaller triangles that become very inefficient to rasterize with hw.
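For anyone wondering why triangle size drops out of the shading cost: in a deferred renderer the lighting pass runs once per screen pixel against the G-buffer, not per triangle. A minimal sketch, assuming a simplified G-buffer layout and a single directional light:

```cpp
#include <cstdio>
#include <vector>

// One G-buffer texel: attributes already written by the rasterization pass.
struct GBufferTexel {
    float nx, ny, nz;                // normal
    float albedoR, albedoG, albedoB; // base color
    float roughness;
};

// Deferred lighting pass: iterates over *pixels*, not triangles, so its cost
// is fixed by resolution regardless of how many (or how tiny) the triangles
// that filled the G-buffer were.
void lightingPass(const std::vector<GBufferTexel>& gbuf, int w, int h,
                  float lx, float ly, float lz /* normalized light direction */) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const GBufferTexel& t = gbuf[y * w + x];
            float ndotl = t.nx * lx + t.ny * ly + t.nz * lz; // simple Lambert term
            if (ndotl < 0.0f) ndotl = 0.0f;
            float r = t.albedoR * ndotl; // shade from stored attributes
            (void)r;                     // a real renderer writes to the output image here
        }
    }
}

int main() {
    int w = 4, h = 4; // tiny stand-in for a full-resolution screen
    std::vector<GBufferTexel> gbuf(w * h, {0, 0, 1, 0.8f, 0.2f, 0.2f, 0.5f});
    lightingPass(gbuf, w, h, 0, 0, 1);
    std::puts("shaded 16 pixels, independent of triangle count");
}
```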
Anyone think there is a chance that they release last year's demo tomorrow?
Doubtful.
The new AA solution, Temporal Super Resolution, keeps up with all this new geometric detail to create a sharper, more stable image than before, with quality approaching true native 4K at the cost of 1080p.
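For a sense of the numbers, and of the general shape of this kind of technique (Epic hasn't published TSR's internals, so the blend weight and the lack of reprojection/clamping below are pure simplification):

```cpp
#include <cstdio>

// Generic temporal-upsampling update for one output pixel: blend this frame's
// jittered low-res sample into an accumulated high-res history. The 0.1 blend
// weight is made up; real TAA/TSR also reprojects and clamps the history.
float temporalAccumulate(float history, float currentSample, float blend = 0.1f) {
    return history + blend * (currentSample - history); // lerp toward new sample
}

int main() {
    // 4K has 4x the pixels of 1080p, which is the shading cost being saved:
    std::printf("4K/1080p pixel ratio: %.1f\n",
                (3840.0 * 2160.0) / (1920.0 * 1080.0));

    // Over several frames the history converges toward the true signal even
    // though each frame only shades a fraction of the output pixels.
    float history = 0.0f, truth = 1.0f;
    for (int frame = 0; frame < 30; ++frame)
        history = temporalAccumulate(history, truth);
    std::printf("history after 30 frames: %.3f\n", history);
}
```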